Adapting Discriminative Reranking to Grounded Language Learning

Joohyun Kim, Department of Computer Science, The University of Texas at Austin, Austin, TX 78701, USA, [email protected]
Raymond J. Mooney, Department of Computer Science, The University of Texas at Austin, Austin, TX 78701, USA, [email protected]

Abstract

We adapt discriminative reranking to improve the performance of grounded language acquisition, specifically the task of learning to follow navigation instructions from observation. Unlike conventional reranking used in syntactic and semantic parsing, gold-standard reference trees are not naturally available in a grounded setting. Instead, we show how the weak supervision of response feedback (e.g. successful task completion) can be used as an alternative, experimentally demonstrating that its performance is comparable to training on gold-standard parse trees.

1 Introduction

Grounded language acquisition involves learning to comprehend and/or generate language by simply observing its use in a naturally occurring context in which the meaning of a sentence is grounded in perception and/or action (Roy, 2002; Yu and Ballard, 2004; Gold and Scassellati, 2007; Chen et al., 2010). Börschinger et al. (2011) introduced an approach that reduces grounded language learning to unsupervised probabilistic context-free grammar (PCFG) induction and demonstrated its effectiveness on the task of sportscasting simulated robot soccer games. Subsequently, Kim and Mooney (2012) extended their approach to make it tractable for the more complex problem of learning to follow natural-language navigation instructions from observations of humans following such instructions in a virtual environment (Chen and Mooney, 2011). The observed sequence of actions provides very weak, ambiguous supervision for learning instructional language since there are many possible ways to describe the same execution path. Although their approach improved accuracy on the navigation task compared to the original work of Chen and Mooney (2011), it was still far from human performance.

Since their system employs a generative model, discriminative reranking (Collins, 2000) could potentially improve its performance. By training a discriminative classifier that uses global features of complete parses to identify correct interpretations, a reranker can significantly improve the accuracy of a generative model. Reranking has been successfully employed to improve syntactic parsing (Collins, 2002b), semantic parsing (Lu et al., 2008; Ge and Mooney, 2006), semantic role labeling (Toutanova et al., 2005), and named entity recognition (Collins, 2002c).

Standard reranking requires gold-standard interpretations (e.g. parse trees) to train the discriminative classifier. However, grounded language learning does not provide gold-standard interpretations for the training examples. Only the ambiguous perceptual context of the utterance is provided as supervision. For the navigation task, this supervision consists of the observed sequence of actions taken by a human when following an instruction. Therefore, it is impossible to directly apply conventional discriminative reranking to such problems. We show how to adapt reranking to work with such weak supervision.
Instead of using gold-standard annotations to determine the correct interpretations, we simply prefer interpretations of navigation instructions that, when executed in the world, actually reach the intended destination. Additionally, we extensively revise the features typically used in parse reranking to work with the PCFG approach to grounded language learning.

The rest of the paper is organized as follows: Section 2 reviews the navigation task and the PCFG approach to grounded language learning. Section 3 presents our modified approach to reranking and Section 4 describes the novel features used to evaluate parses. Section 5 experimentally evaluates the approach, comparing it to several baselines. Finally, Section 6 describes related work, Section 7 discusses future work, and Section 8 concludes.

[Figure 1: Sample virtual world and instruction. (a) Sample virtual world of hallways with varying tiles, wallpapers, and landmark objects indicated by letters (e.g. "H" for hat-rack), illustrating a sample path taken by a human follower. (b) A sample natural language instruction and its formal landmarks plan for the path illustrated above. The subset corresponding to the correct formal plan is shown in bold.]

2 Background

2.1 Navigation Task

We address the navigation learning task introduced by Chen and Mooney (2011). The goal is to interpret natural-language (NL) instructions in a virtual environment, thereby allowing a simulated robot to navigate to a specified location. Figure 1a shows a sample path executed by a human following the instruction in Figure 1b. Given no prior linguistic knowledge, the task is to learn to interpret such instructions by simply observing humans follow sample directions.

Formally speaking, given training examples of the form (e_i, a_i, w_i), where e_i is an NL instruction, a_i is an executed action sequence for the instruction, and w_i is the initial world state, we want to learn to produce an appropriate action sequence a_j given a novel (e_j, w_j). More specifically, one must learn a semantic parser that produces a plan p_j using a formal meaning representation (MR) language introduced by Chen and Mooney (2011). This plan is then executed by a simulated robot in a virtual environment. The MARCO system, introduced by MacMahon et al. (2006), executes the formal plan, flexibly adapting to situations encountered during execution and producing the action sequence a_j. During learning, Chen and Mooney construct a landmarks plan c_i for each training example, which includes the complete context observed in the world-state resulting from each observed action. The correct plan, p_i (which is latent and must be inferred), is assumed to be composed from a subset of the components in the corresponding landmarks plan. The landmarks and correct plans for a sample instruction are shown in Figure 1b.

2.2 PCFG Induction for Grounded Language Learning

The baseline generative model we use for reranking employs the unsupervised PCFG induction approach introduced by Kim and Mooney (2012). This model is, in turn, based on the earlier model of Börschinger et al. (2011), which transforms grounded language learning into unsupervised PCFG induction. The general approach uses grammar-formulation rules which construct CFG productions that form a grammar that effectively maps NL sentences to formal meaning representations (MRs) encoded in its nonterminals.
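As a purely illustrative sketch of the objects involved in this setup, the training triples of Section 2.1 and the role of the induced grammar might be rendered in Python as follows; all names and values here are our own toy examples, not taken from the data or from the authors' code:

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class NavExample:
    """One training example (e_i, a_i, w_i) from Section 2.1 (toy encoding, ours)."""
    instruction: str      # e_i: the NL instruction
    actions: List[str]    # a_i: the observed follower action sequence
    world_state: Dict     # w_i: the initial world state

# A toy instance; the landmarks plan c_i (not shown) would record the full context
# observed after each action, of which the latent correct plan p_i is a subset.
example = NavExample(
    instruction="Turn left and find the sofa",
    actions=["TURN_LEFT", "FORWARD", "FORWARD"],
    world_state={"position": (3, 2), "orientation": "N"},
)

# In the PCFG reduction, nonterminals encode MR fragments, so productions
# (written informally here) rewrite an MR-labeled nonterminal into sub-MR
# nonterminals and, eventually, NL words, e.g.:
#   Turn(LEFT),Travel(steps:2)  ->  Turn(LEFT)  Travel(steps:2)
#   Turn(LEFT)                  ->  "turn" "left"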
After using Expectation-Maximization (EM) to estimate the parameters for these productions using the ambiguous supervision provided by the grounded-learning setting, it produces a PCFG whose most probable parse for a sentence encodes its correct semantic interpretation. Unfortunately, the initial approach of Börschinger et al. (2011) produces explosively large grammars when applied to more complex problems, such as our navigation task. Therefore, Kim and Mooney enhanced their approach to use a previously learned semantic lexicon to reduce the induced grammar to a tractable size. They also altered the processes for constructing productions and mapping parse trees to MRs in order to make the construction of semantic interpretations more compositional and allow the efficient construction of more complex representations. The resulting PCFG can be used to produce a set of most-probable interpretations of instructional sentences for the navigation task. Our proposed reranking model is used to discriminatively reorder the top parses produced by this generative model. A simplified version of a sample parse tree for Kim and Mooney's model is shown in Figure 2.

[Figure 2: Simplified parse for the sentence "Turn left and find the sofa then turn around the corner" for Kim and Mooney's model. Nonterminals show the MR graph; additional nonterminals for generating NL words are omitted.]

3 Modified Reranking Algorithm

In reranking, a baseline generative model is first trained and generates a set of candidate outputs for each training example. Next, a second conditional model is trained which uses global features to rescore the candidates. Reranking using an averaged perceptron (Collins, 2002a) has been successfully applied to a variety of NLP tasks. Therefore, we modify it to rerank the parse trees generated by Kim and Mooney's (2012) model.

The approach requires three subcomponents: 1) a GEN function that returns the list of top n candidate parse trees for each NL sentence produced by the generative model, 2) a feature function Φ that maps an NL sentence, e, and a parse tree, y, into a real-valued feature vector Φ(e, y) ∈ R^d, and 3) a reference parse tree that is compared to the highest-scoring parse tree during training. However, grounded language learning tasks, such as our navigation task, do not provide reference parse trees for training examples. Instead, our modified model replaces the gold-standard reference parse with the "pseudo-gold" parse tree whose derived MR plan is most successful at getting to the desired goal location. Thus, the third component in our reranking model becomes an evaluation function EXEC that maps a parse tree y into a real number representing the success rate (w.r.t. successfully reaching the intended destination) of the derived MR plan m composed from y. Additionally, we improve the perceptron training algorithm by using multiple reference parses to update the weight vector W.

Algorithm 1: Averaged perceptron training with response-based update
Input: A set of training examples (e_i, y*_i), where e_i is an NL sentence and y*_i = argmax_{y ∈ GEN(e_i)} EXEC(y)
Output: The parameter vector W, averaged over all iterations 1...T
1: procedure PERCEPTRON
2:   Initialize W = 0
3:   for t = 1...T, i = 1...n do
4:     y_i = argmax_{y ∈ GEN(e_i)} Φ(e_i, y) · W
5:     if y_i ≠ y*_i then
6:       W = W + Φ(e_i, y*_i) − Φ(e_i, y_i)
7:     end if
8:   end for
9: end procedure
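For concreteness, here is a minimal Python sketch of Algorithm 1; GEN, phi and EXEC stand in for the components described above, and this rendering is ours, not the authors' implementation:

def dot(features, weights):
    """Dot product between a sparse feature dict and a weight dict."""
    return sum(v * weights.get(f, 0.0) for f, v in features.items())

def train_reranker(sentences, GEN, phi, EXEC, T=10):
    """Sketch of Algorithm 1: averaged perceptron with response-based update.

    sentences : the NL training sentences e_i
    GEN(e)    : candidate parse trees for e from the baseline model
    phi(e, y) : sparse feature dict for sentence e and parse y
    EXEC(y)   : execution success rate of the MR plan derived from y
    """
    weights, total, steps = {}, {}, 0
    for _ in range(T):
        for e in sentences:
            candidates = GEN(e)
            y_star = max(candidates, key=EXEC)   # pseudo-gold reference
            y_hat = max(candidates, key=lambda y: dot(phi(e, y), weights))
            if y_hat is not y_star:
                for f, v in phi(e, y_star).items():
                    weights[f] = weights.get(f, 0.0) + v
                for f, v in phi(e, y_hat).items():
                    weights[f] = weights.get(f, 0.0) - v
            steps += 1
            for f, v in weights.items():         # running sum for averaging
                total[f] = total.get(f, 0.0) + v
    return {f: v / steps for f, v in total.items()}

Averaging the weights over all update steps follows the standard averaged-perceptron recipe (Collins, 2002a).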
Although we determine the pseudo-gold reference tree to be the candidate parse y∗such that y∗ = arg maxy∈GEN(e) EXEC(y), it may not actually be the correct parse for the sentence. Other parses may contain useful information for learning, and therefore we devise a way to update weights using all candidate parses whose successful execution rate is greater than the parse preferred by the currently learned model. 3.1 Response-Based Weight Updates To circumvent the need for gold-standard reference parses, we select a pseudo-gold parse from the candidates produced by the GEN function. In a similar vein, when reranking semantic parses, Ge and Mooney (2006) chose as a reference parse the one which was most similar to the gold-standard semantic annotation. However, in the navigation task, the ultimate goal is to generate a plan that, when actually executed in the virtual environment, leads to the desired destination. Therefore, the pseudo-gold reference is chosen as the candidate parse that produces the MR plan with the great220 est execution success. This requires an external module that evaluates the execution accuracy of the candidate parses. For the navigation task, we use the MARCO (MacMahon et al., 2006) execution module, which is also used to evaluate how well the overall system learns to follow directions (Chen and Mooney, 2011). Since MARCO is nondeterministic when executing underspecified plans, we execute each candidate plan 10 times, and its execution rate is the percentage of trials in which it reaches the correct destination. When there are multiple candidate parses tied for the highest execution rate, the one assigned the largest probability by the baseline model is selected. Our modified averaged perceptron procedure with such a response-based update is shown in Algorithm 1. One additional issue must be addressed when computing the output of the GEN function. The final plan MRs are produced from parse trees using compositional semantics (see Kim and Mooney (2012) for details). Consequently, the n-best parse trees for the baseline model do not necessarily produce the n-best distinct plans, since many parses can produce the same plan. Therefore, we adapt the GEN function to produce the n best distinct plans rather than the n best parses. This may require examining many more than the n best parses, because many parses have insignificant differences that do not affect the final plan. The score assigned to a plan is the probability of the most probable parse that generates that plan. In order to efficiently compute the n best plans, we modify the exact n-best parsing algorithm developed by Huang and Chiang (2005). The modified algorithm ensures that each plan in the computed n best list produces a new distinct plan. 3.2 Weight Updates Using Multiple Parses Typically, when used for reranking, the averaged perceptron updates its weights using the featurevector difference between the current best predicted candidate and the gold-standard reference (line 6 in Algorithm 1). In our initial modified version, we replaced the gold-standard reference parse with the pseudo-gold reference, which has the highest execution rate amongst all candidate parses. However, this ignores all other candidate parses during perceptron training. 
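The remedy developed in the remainder of this section replaces the single feature-vector difference of line 6 with a weighted sum of differences against every candidate that executes better than the current prediction. Anticipating the update rule stated formally below, a sketch in the same style as the perceptron sketch above (our code, not the authors'):

def multi_parse_update(weights, e, candidates, y_hat, phi, EXEC):
    """Replacement for lines 5-6 of Algorithm 1: update toward every candidate
    whose execution rate exceeds that of the current prediction y_hat, weighting
    each feature-vector difference by the gap in execution rate."""
    base = EXEC(y_hat)
    for y in candidates:
        gap = EXEC(y) - base
        if y is not y_hat and gap > 0:
            for f, v in phi(e, y).items():
                weights[f] = weights.get(f, 0.0) + gap * v
            for f, v in phi(e, y_hat).items():
                weights[f] = weights.get(f, 0.0) - gap * v
    return weights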
However, it is not ideal to regard other candidate parses as “useless.” There may be multiple candidate parses with the same maximum execution rate, and even candidates with lower execution rates could represent the correct plan for the instruction given the weak, indirect supervision provided by the observed sequence of human actions. Therefore, we also consider a further modification of the averaged perceptron algorithm which updates its weights using multiple candidate parses. Instead of only updating the weights with the single difference between the predicted and pseudo-gold parses, the weight vector ¯W is updated with the sum of feature-vector differences between the current predicted candidate and all other candidates that have a higher execution rate. Formally, in this version, we replace lines 5–6 of Algorithm 1 with: 1: for all y ∈GEN(ei) where y ̸= yi and EXEC(y) > EXEC(yi) do 2: ¯W = ¯W + (EXEC(y) −EXEC(yi)) ×(Φ(ei, y) −Φ(ei, yi)) 3: end for where EXEC(y) is the execution rate of the MR plan m derived from parse tree y. In the experiments below, we demonstrate that, by exploiting multiple reference parses, this new update rule increases the execution accuracy of the final system. Intuitively, this approach gathers additional information from all candidate parses with higher execution accuracy when learning the discriminative reranker. In addition, as shown in line 2 of the algorithm above, it uses the difference in execution rates between a candidate and the currently preferred parse to weight the update to the parameters for that candidate. This allows more effective plans to have a larger impact on the learned model in each iteration. 4 Reranking Features This section describes the features Φ extracted from parses produced by the generative model and used to rerank the candidates. 4.1 Base Features The base features adapt those used in previous reranking methods, specifically those of Collins (2002a), Lu et al. (2008), and Ge and Mooney (2006), which are directly extracted from parse trees. In addition, we also include the log probability of the parse tree as an additional feature. Figure 3 shows a sample full parse tree from our baseline model, which is used when explaining the 221 L1: Turn(LEFT), Verify(front : SOFA, back : EASEL), Travel(steps : 2), Verify(at : SOFA), Turn(RIGHT) L6: Turn() PhraseL6 WordL6 corner PhXL6 Word∅ the PhXL6 WordL6 around PhXL6 WordL6 turn PhXL6 Word∅ then L3: Travel(steps : 2), Verify(at : SOFA), Turn(RIGHT) L5: Travel(), Verify(at : SOFA) PhraseL5 WordL5 sofa PhXL5 Word∅ the PhXL5 WordL5 find L2: Turn(LEFT), Verify(front : SOFA) L4: Turn(LEFT) PhraseL4 Word∅ and PhL4 WordL4 left PhXL4 WordL4 Turn Figure 3: Sample full parse tree for the sentence “Turn left and find the soft then turn around the corner” used to explain reranking features. Nonterminals representing MR plan components are shown, which are labeled L1 to L6 for ease of reference. Additional nonterminals such as Phrase, Ph, PhX, and Word are subsidiary ones for generating NL words from MR nonterminals. They are also shown in order to represent the entire process of how parse trees are constructed (for details, refer to Kim and Mooney (2012)). reranking features below, each illustrated by an example. a) PCFG Rule. Indicates whether a particular PCFG rule is used in the parse tree: f(L1 ⇒ L2L3) = 1. b) Grandparent PCFG Rule. Indicates whether a particular PCFG rule as well as the nonterminal above it is used in the parse tree: f(L3 ⇒L5L6|L1) = 1. c) Long-range Unigram. 
Indicates whether a nonterminal has a given NL word below it in the parse tree: f(L2 ; left) = 1 and f(L4 ; turn) = 1. d) Two-level Long-range Unigram. Indicates whether a nonterminal has a child nonterminal which eventually generates a NL word in the parse tree: f(L4 ; left|L2) = 1 e) Unigram. Indicates whether a nonterminal produces a given child nonterminal or terminal NL word in the parse tree: f(L1 →L2) = 1 and f(L1 →L3) = 1. f) Grandparent Unigram. Indicates whether a nonterminal has a given child nonterminal/terminal below it, as well as a given parent nonterminal: f(L2 →L4|L1) = 1 g) Bigram. Indicates whether a given bigram of nonterminal/terminals occurs for given a parent nonterminal: f(L1 →L2 : L3) = 1. h) Grandparent Bigram. Same as Bigram, but also includes the nonterminal above the parent nonterminal: f(L3 →L5 : L6|L1) = 1. i) Log-probability of Parse Tree. Certainty assigned by the base generative model. 4.2 Predicate-Only Features The base features above generally include nonterminal symbols used in the parse tree. In the grounded PCFG model, nonterminals are named after components of the semantic representations (MRs), which are complex and numerous. There are ≃2,500 nonterminals in the grammar constructed for the navigation data, most of which are very specific and rare. This results in a very large, sparse feature space which can easily lead 222 the reranking model to over-fit the training data and prevent it from generalizing properly. Therefore, we also tried constructing more general features that are less sparse. First, we construct generalized versions of the base features in which nonterminal symbols use only predicate names and omit their arguments. In the navigation task, action arguments frequently contain redundant, rarely used information. In particular, the interleaving verification steps frequently include many details that are never actually mentioned in the NL instructions. For instance, a nonterminal for the MR Turn(LEFT), Verify(at:SOFA,front:EASEL), Travel(steps:3) is transformed into the predicate-only form Turn(), Verify(), Travel() , and then used to construct more general versions of the base features described in the previous section. Second, another version of the base features are constructed in which nonterminal symbols include action arguments but omit all interleaving verification steps. This is a somewhat more conservative simplification of the nonterminal symbols. Although verification steps sometimes help interpret the actions and their surrounding context, they frequently cause the nonterminal symbols to become unnecessarily complex and specific. 4.3 Descended Action Features Finally, another feature group which we utilize captures whether a particular atomic action in a nonterminal “descends” into one of its child nonterminals or not. An atomic action consists of a predicate and its arguments, e.g. Turn(LEFT), Travel(steps:2), or Verify(at:SOFA). When an atomic action descends into lower nonterminals in a parse tree, it indicates that it is mentioned in the NL instruction and is therefore important. Below are several feature types related to descended actions that are used in our reranking model: a) Descended Action. Indicates whether a given atomic action in a nonterminal descends to the next level. In Figure 3, f(Turn(LEFT)) = 1 since it descends into L2 and L4. b) Descended Action Unigram. Same as Descended Action, but also includes the current nonterminal: f(Turn(LEFT)|L1) = 1. c) Grandparent Descended Action Unigram. 
Same as Descended Action Unigram, but additionally includes the parent nonterminal as well as the current one: f(Turn(LEFT)|L2, L1) = 1.

d) Long-range Descended Action Unigram. Indicates whether a given atomic action in a nonterminal descends to a child nonterminal and this child generates a given NL word below it: f(Turn(LEFT) ; left) = 1.

5 Experimental Evaluation

5.1 Data and Methodology

The navigation data was collected by MacMahon et al. (2006), and includes English instructions and human follower data [1]. The data contains 706 route instructions for three virtual worlds. The instructions were produced by six instructors for 126 unique starting and ending location pairs over the three maps. Each instruction is annotated with 1 to 15 human follower traces, with an average of 10.4 actions per instruction. Each instruction contains an average of 5.0 sentences, each with an average of 7.8 words. Chen and Mooney (2011) constructed a version of the data in which each sentence is annotated with the actions taken by the majority of followers when responding to this sentence. This single-sentence version is used for training. Manually annotated "gold standard" formal plans for each sentence are used for evaluation purposes only.

[1] Data is available at http://www.cs.utexas.edu/users/ml/clamp/navigation/

We followed the same experimental methodology as Kim and Mooney (2012) and Chen and Mooney (2011). We performed "leave one environment out" cross-validation, i.e. 3 trials of training on two environments and testing on the third. The baseline model is first trained on data for two environments and then used to generate the n = 50 best plans for both training and testing instructions. As mentioned in Section 3.1, we need to generate many more top parse trees to get 50 distinct formal MR plans. We limit the number of best parse trees to 1,000,000, and even with this high limit, some training examples were left with fewer than 50 distinct plans [2]. Each candidate plan is then executed using MARCO and its rate of successfully reaching the goal is recorded.

[2] 9.6% of the examples (310 out of 3237 in total) produced fewer than 50 distinct MR plans in the evaluation. This was mostly due to exceeding the parse-tree limit and partly because the baseline model failed to parse some NL sentences.

Table 1: Oracle parse and execution accuracy for single sentence and complete paragraph instructions for the n best parses.

n                       1      2      5      10     25     50
Parse Accuracy (F1)     74.81  79.08  82.78  85.32  87.52  88.62
Plan Execution
  Single-sentence       57.22  63.86  70.93  76.41  83.59  87.02
  Paragraph             20.17  28.08  35.34  40.64  48.69  53.66

Our reranking model is then trained on the training data using the n-best candidate parses. We only retain reranking features that appear (i.e. have a value of 1) at least twice in the training data. Finally, we measure both parse and execution accuracy on the test data.

Parse accuracy evaluates how well a system maps novel NL sentences for new environments into correct MR plans (Chen and Mooney, 2011). It is calculated by comparing the system's MR output to the gold-standard MR. Accuracy is measured using F1, the harmonic mean of precision and recall for individual MR constituents, thereby giving partial credit to approximately correct MRs. We then execute the resulting MR plans in the test environment to see whether they successfully reach the desired destinations. Execution is evaluated both for single sentence and complete paragraph instructions.
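The two measures just described can be sketched as follows, treating an MR as a set of constituents and a single plan execution as a boolean trial; both simplifications are ours and gloss over the details of constituent matching:

def constituent_f1(predicted_mr, gold_mr):
    """Partial-credit parse accuracy: F1 over individual MR constituents."""
    predicted, gold = set(predicted_mr), set(gold_mr)
    correct = len(predicted & gold)
    if correct == 0:
        return 0.0
    precision, recall = correct / len(predicted), correct / len(gold)
    return 2 * precision * recall / (precision + recall)

def execution_accuracy(plan, execute_once, trials=10):
    """Rate at which nondeterministic (MARCO-style) execution reaches the goal."""
    return sum(bool(execute_once(plan)) for _ in range(trials)) / trials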
Successful execution rates are calculated by averaging 10 nondeterministic MARCO executions. 5.2 Reranking Results Oracle results As typical in reranking experiments, we first present results for an “oracle” that always returns the best result amongst the top-n candidates produced by the baseline system, thereby providing an upper bound on the improvements possible with reranking. Table 1 shows oracle accuracy for both semantic parsing and plan execution for single sentence and complete paragraph instructions for various values of n. For oracle parse accuracy, for each sentence, we pick the parse that gives the highest F1 score. For oracle single-sentence execution accuracy, we pick the parse that gives the highest execution success rate. These singlesentence plans are then concatenated to produce a complete plan for each paragraph instruction in order to measure overall execution accuracy. Since making an error in any of the sentences in an instruction can easily lead to the wrong final destination, paragraph-level accuracies are always much lower than sentence-level ones. In order to balance oracle accuracy and the computational effort required to produce n distinct plans, we chose n = 50 for the final experiments since oracle performance begins to asymptote at this point. Response-based vs. gold-standard reference weight updates Table 2 presents reranking results for our proposed response-based weight update (Single) for the averaged perceptron (cf. Section 3.1) compared to the typical weight update method using goldstandard parses (Gold). Since the gold-standard annotation gives the correct MR rather than a parse tree for each sentence, Gold selects as a single reference parse the candidate in the top 50 whose resulting MR is most similar to the gold-standard MR as determined by its parse accuracy. Ge and Mooney (2006) employ a similar approach when reranking semantic parses. The results show that our response-based approach (Single) has better execution accuracy than both the baseline and the standard approach using gold-standard parses (Gold). However, Gold does perform best on parse accuracy since it explicitly focuses on maximizing the accuracy of the resulting MR. In contrast, by focusing discriminative training on optimizing performance of the ultimate end task, our response-based approach actually outperforms the traditional approach on the final task. In addition, it only utilizes feedback that is naturally available for the task, rather than requiring an expert to laboriously annotate each sentence with a gold-standard MR. Even though Gold captures more elements of the gold-standard MRs, it may miss some critical MR components that are crucial to the final navigation task. The overall result is very promising because it demonstrates how reranking can be applied to grounded language learning tasks where gold-standard parses are not readily available. 224 Parse Acc Plan Execution F1 Single Para Baseline 74.81 57.22 20.17 Gold 78.26 52.57 19.33 Single 73.32 59.65 22.62 Multi 73.43 62.81 26.57 Table 2: Reranking results comparing our response-based methods using single (Single) or multiple (Multi) pseudo-gold parses to the standard approach using a single gold-standard parse (Gold). Baseline refers to Kim and Mooney (2012)’s system. Reranking results use all features described in Section 4. “Single“ means the single-sentence version and “Para” means the full paragraph version of the corpus. Weight update with single vs. 
multiple reference parses Table 2 also shows performance when using multiple reference parse trees to update weights (cf. Section 3.2). Using multiple parses (Multi) clearly performs better for all evaluation metrics, particularly execution. As explained in Section 3.2, the single-best pseudo-gold parse provides weak, ambiguous feedback since it only provides a rough estimate of the response feedback from the execution module. Using a variety of preferable parses to update weights provides a greater amount and variety of weak feedback and therefore leads to a more accurate model.3 Comparison of different feature groups Table 3 compares reranking results using the different feature groups described in Section 4. Compared to the baseline model (Kim and Mooney, 2012), each of the feature groups Base (base features), Pred (predicate-only and verificationremoved features), and Desc (descended action features) helps improve the performance of plan execution for both single sentence and complete paragraph navigation instructions. Among them, Desc is the most effective group of features. Combinations of the feature groups helps fur3We also tried extending Gold to use multiple reference parses in the same manner, but this actually degraded its performance for all metrics. This indicates that, unlike Multi, parses other than the best one do not have useful information in terms of optimizing normal parse accuracy. Instead, additional parses seem to add noise to the training process in this case. Therefore, updating with multiple parses does not appear to be useful in standard reranking. Features Parse Acc Plan Execution F1 Single Para Baseline 74.81 57.22 20.17 Base 71.50 60.09 23.20 Pred 71.61 60.87 24.13 Desc 73.90 61.33 25.00 Base+Pred 69.52 61.49 26.24 Base+Desc 73.66 61.72 25.58 Pred+Desc 72.56 62.36 26.04 All 73.43 62.81 26.57 Table 3: Reranking results comparing different sets of features. Base refers to base features (cf. Section 4.1), Pred refers to predicate-only features and also includes features based on removing interleaving verification steps (cf. Section 4.2), Desc refers to descended action features (cf. Section 4.3). All refers to all the features including Base, Pred, and Desc. All results use weight update with multiple reference parses (cf. Section 3.2). ther improve the plan execution performance, and reranking using all of the feature groups (All) performs the best, as expected. However, since our model is optimizing plan execution during training, the results for parse accuracy are always worse than the baseline model. 6 Related Work Discriminative reranking is a common machine learning technique to improve the output of generative models. It has been shown to be effective for various natural language processing tasks including syntactic parsing (Collins, 2000; Collins, 2002b; Collins and Koo, 2005; Charniak and Johnson, 2005; Huang, 2008), semantic parsing (Lu et al., 2008; Ge and Mooney, 2006), partof-speech tagging (Collins, 2002a), semantic role labeling (Toutanova et al., 2005), named entity recognition (Collins, 2002c). machine translation (Shen et al., 2004; Fraser and Marcu, 2006) and surface realization in generation (White and Rajkumar, 2009; Konstas and Lapata, 2012). However, to our knowledge, there has been no previous attempt to apply discriminative reranking to grounded language acquisition, where goldstandard reference parses are not typically available for training reranking models. 
Our use of response-based training is similar 225 to work on learning semantic parsers from execution output such as the answers to database queries (Clarke et al., 2010; Liang et al., 2011). Although the demands of grounded language tasks, such as following navigation instructions, are different, it would be interesting to try adapting these alternative approaches to such problems. 7 Future Work In the future, we would like to explore the construction of better, more-general reranking features that are less prone to over-fitting. Since typical reranking features rely on the combination and/or modification of nonterminals appearing in parse trees, for the large PCFG’s produced for grounded language learning, such features are very sparse and rare. Although the current features provide a significant increase in performance, oracle results imply that an even larger benefit may be achievable. In addition, employing other reranking methodologies, such as kernel methods (Collins, 2002b), and forest reranking exploiting a packed forest of exponentially many parse trees (Huang, 2008), is another area of future work. We also would like to apply our approach to other reranking algorithms such as SVMs (Joachims, 2002) and MaxEnt methods (Charniak and Johnson, 2005). 8 Conclusions In this paper, we have shown how to adapt discriminative reranking to grounded language learning. Since typical grounded language learning problems, such as navigation instruction following, do not provide the gold-standard reference parses required by standard reranking models, we have devised a novel method for using the weaker supervision provided by response feedback (e.g. the execution of inferred navigation plans) when training a perceptron-based reranker. This approach was shown to be very effective compared to the traditional method of using gold-standard parses. In addition, since this response-based supervision is weak and ambiguous, we have also presented a method for using multiple reference parses to perform perceptron weight updates and shown a clear further improvement in end-task performance with this approach. Acknowledgments We thank anonymous reviewers for their helpful comments to improve this paper. This work was funded by the NSF grant IIS-0712907 and IIS1016312. Experiments were performed on the Mastodon Cluster, provided by NSF Grant EIA0303609. References Benjamin B¨orschinger, Bevan K. Jones, and Mark Johnson. 2011. Reducing grounded learning tasks to grammatical inference. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 1416–1425, Stroudsburg, PA, USA. Association for Computational Linguistics. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43nd Annual Meeting of the Association for Computational Linguistics (ACL-05), pages 173–180, Ann Arbor, MI, June. David L. Chen and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI2011), San Francisco, CA, USA, August. David L. Chen, Joohyun Kim, and Raymond J. Mooney. 2010. Training a multilingual sportscaster: Using perceptual context to learn language. Journal of Artificial Intelligence Research, 37:397–435. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. 
In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010), pages 18–27, Uppsala, Sweden, July. Association for Computational Linguistics. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–69. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML-2000), pages 175–182, Stanford, CA, June. Michael Collins. 2002a. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP-02), Philadelphia, PA, July. Michael Collins. 2002b. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proceedings of 226 the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), pages 263–270, Philadelphia, PA, July. Michael Collins. 2002c. Ranking algorithms for named-entity extraction: Boosting and the voted perceptron. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), pages 489–496, Philadelphia, PA. Alexander Fraser and Daniel Marcu. 2006. Semisupervised training for statistical word alignment. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics (ACL-06), pages 769–776, Stroudsburg, PA, USA. Association for Computational Linguistics. R. Ge and R. J. Mooney. 2006. Discriminative reranking for semantic parsing. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING/ACL06), Sydney, Australia, July. Kevin Gold and Brian Scassellati. 2007. A robot that uses existing vocabulary to infer non-visual word meanings from observation. In Proceedings of the 22nd national conference on Artificial intelligence Volume 1, AAAI’07, pages 883–888. AAAI Press. Liang Huang and David Chiang. 2005. Better kbest parsing. In Proceedings of the Ninth International Workshop on Parsing Technology, Parsing ’05, pages 53–64, Stroudsburg, PA, USA. Association for Computational Linguistics. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-08: HLT, pages 586–594, Columbus, Ohio, June. Association for Computational Linguistics. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD2002), Edmonton, Canada. Joohyun Kim and Raymond J. Mooney. 2012. Unsupervised PCFG induction for grounded language learning with highly ambiguous supervision. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Natural Language Learning, EMNLP-CoNLL ’12. Ioannis Konstas and Mirella Lapata. 2012. Conceptto-text generation via discriminative reranking. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL ’12, pages 369–378, Stroudsburg, PA, USA. Association for Computational Linguistics. Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of ACL, Portland, Oregon, June. 
Association for Computational Linguistics. Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S. Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP08), Honolulu, HI, October. Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: connecting language, knowledge, and action in route instructions. In proceedings of the 21st national conference on Artificial intelligence - Volume 2, AAAI’06, pages 1475– 1482. AAAI Press. Deb Roy. 2002. Learning visually grounded words and syntax for a scene description task. Computer Speech and Language, 16(3):353–385. Libin Shen, Anoop Sarkar, and Franz Josef Och. 2004. Discriminative reranking for machine translation. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 177–184, Boston, Massachusetts, USA, May 2 May 7. Association for Computational Linguistics. Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of the 43nd Annual Meeting of the Association for Computational Linguistics (ACL-05), pages 589–596, Ann Arbor, MI, June. Michael White and Rajakrishnan Rajkumar. 2009. Perceptron reranking for CCG realization. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 Volume 1, EMNLP ’09, pages 410–419, Stroudsburg, PA, USA. Association for Computational Linguistics. Chen Yu and Dana H. Ballard. 2004. On the integration of grounding language and learning objects. In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-04), pages 488–493. 227
Universal Conceptual Cognitive Annotation (UCCA)

Omri Abend*, Institute of Computer Science, The Hebrew University, [email protected]
Ari Rappoport, Institute of Computer Science, The Hebrew University, [email protected]

* Omri Abend is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship.

Abstract

Syntactic structures, by their nature, reflect first and foremost the formal constructions used for expressing meanings. This renders them sensitive to formal variation both within and across languages, and limits their value to semantic applications. We present UCCA, a novel multi-layered framework for semantic representation that aims to accommodate the semantic distinctions expressed through linguistic utterances. We demonstrate UCCA's portability across domains and languages, and its relative insensitivity to meaning-preserving syntactic variation. We also show that UCCA can be effectively and quickly learned by annotators with no linguistic background, and describe the compilation of a UCCA-annotated corpus.

1 Introduction

Syntactic structures are mainly committed to representing the formal patterns of a language, and only indirectly reflect semantic distinctions. For instance, while virtually all syntactic annotation schemes are sensitive to the structural difference between (a) "John took a shower" and (b) "John showered", they seldom distinguish between (a) and the markedly different (c) "John took my book". In fact, the annotations of (a) and (c) are identical under the most widely-used schemes for English, the Penn Treebank (PTB) (Marcus et al., 1993) and CoNLL-style dependencies (Surdeanu et al., 2008) (see Figure 1).

Underscoring the semantic similarity between (a) and (b) can assist semantic applications. One example is machine translation to target languages that do not express this structural distinction (e.g., both (a) and (b) would be translated to the same German sentence "John duschte"). Question Answering applications can also benefit from distinguishing between (a) and (c), as this knowledge would help them recognize "my book" as a much more plausible answer than "a shower" to the question "what did John take?".

This paper presents a novel approach to grammatical representation that annotates semantic distinctions and aims to abstract away from specific syntactic constructions. We call our approach Universal Conceptual Cognitive Annotation (UCCA). The word "cognitive" refers to the type of categories UCCA uses and its theoretical underpinnings, and "conceptual" stands in contrast to "syntactic". The word "universal" refers to UCCA's capability to accommodate a highly rich set of semantic distinctions, and its aim to ultimately provide all the necessary semantic information for learning grammar. In order to accommodate this rich set of distinctions, UCCA is built as a multi-layered structure, which allows for its open-ended extension. This paper focuses on the foundational layer of UCCA, a coarse-grained layer that represents some of the most important relations expressed through linguistic utterances, including argument structure of verbs, nouns and adjectives, and the inter-relations between them (Section 2).

UCCA is supported by extensive typological cross-linguistic evidence and accords with the leading Cognitive Linguistics theories.
We build primarily on Basic Linguistic Theory (BLT) (Dixon, 2005; 2010a; 2010b; 2012), a typological approach to grammar successfully used for the de228 scription of a wide variety of languages. BLT uses semantic similarity as its main criterion for categorizing constructions both within and across languages. UCCA takes a similar approach, thereby creating a set of distinctions that is motivated cross-linguistically. We demonstrate UCCA’s relative insensitivity to paraphrasing and to crosslinguistic variation in Section 4. UCCA is exceptional in (1) being a semantic scheme that abstracts away from specific syntactic forms and is not defined relative to a specific domain or language, (2) providing a coarse-grained representation which allows for open-ended extension, and (3) using cognitively-motivated categories. An extensive comparison of UCCA to existing approaches to syntactic and semantic representation, focusing on the major resources available for English, is found in Section 5. This paper also describes the compilation of a UCCA-annotated corpus. We provide a quantitative assessment of the annotation quality. Our results show a quick learning curve and no substantial difference in the performance of annotators with and without background in linguistics. This is an advantage of UCCA over its syntactic counterparts that usually need annotators with extensive background in linguistics (see Section 3). We note that UCCA’s approach that advocates automatic learning of syntax from semantic supervision stands in contrast to the traditional view of generative grammar (Clark and Lappin, 2010). 2 The UCCA Scheme 2.1 The Formalism UCCA uses directed acyclic graphs (DAGs) to represent its semantic structures. The atomic meaning-bearing units are placed at the leaves of the DAG and are called terminals. In the foundational layer, terminals are words and multi-word chunks, although this definition can be extended to include arbitrary morphemes. The nodes of the graph are called units. A unit may be either (i) a terminal or (ii) several elements jointly viewed as a single entity according to some semantic or cognitive consideration. In many cases, a non-terminal unit is comprised of a single relation and the units it applies to (its arguments), although in some cases it may also contain secondary relations. Hierarchy is formed by using units as arguments or relations in other units. Categories are annotated over the graph’s edges, and represent the descendant unit’s role in forming the semantics of the parent unit. Therefore, the internal structure of a unit is represented by its outbound edges and their categories, while the roles a unit plays in the relations it participates in are represented by its inbound edges. We note that UCCA’s structures reflect a single interpretation of the text. Several discretely different interpretations (e.g., high vs. low PP attachments) may therefore yield several different UCCA annotations. UCCA is a multi-layered formalism, where each layer specifies the relations it encodes. The question of which relations will be annotated (equivalently, which units will be formed) is determined by the layer in question. For example, consider “John kicked his ball”, and assume our current layer encodes the relations expressed by “kicked” and by “his”. In that case, the unit “his” has a single argument1 (“ball”), while “kicked” has two (“John” and “his ball”). 
Therefore, the units of the sentence are the terminals (which are always units), “his ball” and “John kicked his ball”. The latter two are units by virtue of expressing a relation along with its arguments. See Figure 2(a) for a graph representation of this example. For a brief comparison of the UCCA formalism with other dependency annotations see Section 5. 2.2 The UCCA Foundational Layer The foundational layer is designed to cover the entire text, so that each word participates in at least one unit. It focuses on argument structures of verbal, nominal and adjectival predicates and the inter-relations between them. Argument structure phenomena are considered basic by many approaches to semantic and grammatical representation, and have a high applicative value, as demonstrated by their extensive use in NLP. The foundational layer views the text as a collection of Scenes. A Scene can describe some movement or action, or a temporally persistent state. It generally has a temporal and a spatial dimension, which can be specific to a particular time and place, but can also describe a schematized event which refers to many events by highlighting a common meaning component. For example, the Scene “John loves bananas” is a schematized event, which refers to John’s disposition towards bananas without making any temporal or spatial 1The anaphoric aspects of “his” are not considered part of the current layer (see Section 2.3). 229 John took a shower -ROOTROOT SBJ OBJ NMOD (a) John showered -ROOTROOT SBJ (b) John took my book -ROOTROOT SBJ OBJ NMOD (c) Figure 1: CoNLL-style dependency annotations. Note that (a) and (c), which have different semantics but superficially similar syntax, have the same annotation. Abb. Category Short Definition Scene Elements P Process The main relation of a Scene that evolves in time (usually an action or movement). S State The main relation of a Scene that does not evolve in time. A Participant A participant in a Scene in a broad sense (including locations, abstract entities and Scenes serving as arguments). D Adverbial A secondary relation in a Scene (including temporal relations). Elements of Non-Scene Units C Center Necessary for the conceptualization of the parent unit. E Elaborator A non-Scene relation which applies to a single Center. N Connector A non-Scene relation which applies to two or more Centers, highlighting a common feature. R Relator All other types of non-Scene relations. Two varieties: (1) Rs that relate a C to some super-ordinate relation, and (2) Rs that relate two Cs pertaining to different aspects of the parent unit. Inter-Scene Relations H Parallel Scene A Scene linked to other Scenes by regular linkage (e.g., temporal, logical, purposive). L Linker A relation between two or more Hs (e.g., “when”, “if”, “in order to”). G Ground A relation between the speech event and the uttered Scene (e.g., “surprisingly”, “in my opinion”). Other F Function Does not introduce a relation or participant. Required by the structural pattern it appears in. Table 1: The complete set of categories in UCCA’s foundational layer. specifications. The definition of a Scene is motivated cross-linguistically and is similar to the semantic aspect of the definition of a “clause” in Basic Linguistic Theory2. Table 1 provides a concise description of the categories used by the foundational layer3. We turn to a brief description of them. Simple Scenes. 
Every Scene contains one main relation, which is the anchor of the Scene, the most important relation it describes (similar to frameevoking lexical units in FrameNet (Baker et al., 1998)). We distinguish between static Scenes, that describe a temporally persistent state, and processual Scenes that describe a temporally evolving event, usually a movement or an action. The main relation receives the category State (S) in static and Process (P) in processual Scenes. We note that the S-P distinction is introduced here mostly for practical purposes, and that both categories can be viewed as sub-categories of the more abstract category Main Relation. A Scene contains one or more Participants (A). 2As UCCA annotates categories on its edges, Scene nodes bear no special indication. They can be identified by examining the labels on their outgoing edges (see below). 3Repeated here with minor changes from (Abend and Rappoport, 2013), which focuses on the categories themselves. This category subsumes concrete and abstract participants as well as embedded Scenes (see below). Scenes may also contain secondary relations, which are marked as Adverbials (D). The above categories are indifferent to the syntactic category of the Scene-evoking unit, be it a verb, a noun, an adjective or a preposition. For instance, in the Scene “The book is in the garden”, “is in” is the S, while “the book” and “the garden” are As. In “Tomatoes are red”, the main static relation is “are red”, while “Tomatoes” is an A. The foundational layer designates a separate set of categories to units that do not evoke a Scene. Centers (C) are the sub-units of a non-Scene unit that are necessary for the unit to be conceptualized and determine its semantic type. There can be one or more Cs in a non-Scene unit4. Other sub-units of non-Scene units are categorized into three types. First, units that apply to a single C are annotated as Elaborators (E). For instance, “big” in “big dogs” is an E, while “dogs” is a C. We also mark determiners as Es in this coarsegrained layer5. Second, relations that relate two or 4By allowing several Cs we avoid the difficulties incurred by the common single head assumption. In some cases the Cs are inferred from context and can be implicit. 5Several Es that apply to a single C are often placed in 230 more Cs, highlighting a common feature or role (usually coordination), are called Connectors (N). See an example in Figure 2(b). Relators (R) cover all other types of relations between two or more Cs. Rs appear in two main varieties. In one, Rs relate a single entity to a super-ordinate relation. For instance, in “I heard noise in the kitchen”, “in” relates “the kitchen” to the Scene it is situated in. In the other, Rs relate two units pertaining to different aspects of the same entity. For instance, in “bottom of the sea”, “of” relates “bottom” and “the sea”, two units that refer to different aspects of the same entity. Some units do not introduce a new relation or entity into the Scene, and are only part of the formal pattern in which they are situated. Such units are marked as Functions (F). For example, in the sentence “it is customary for John to come late”, the “it” does not refer to any specific entity or relation and is therefore an F. Two example annotations of simple Scenes are given in Figure 2(a) and Figure 2(b). More complex cases. UCCA allows units to participate in more than one relation. This is a natural requirement given the wealth of distinctions UCCA is designed to accommodate. 
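To make these structures concrete, here is a toy Python encoding of UCCA units as a graph with category-labeled edges; the representation and names are ours, not the scheme's actual file format, and nothing in it prevents a unit from having more than one parent:

class Unit:
    """A UCCA-style unit: either a terminal word or a node over sub-units."""
    def __init__(self, text=None):
        self.text = text          # set for terminal units only
        self.edges = []           # (category, child Unit) pairs

    def add(self, category, child):
        self.edges.append((category, child))
        return child

# "John kicked his ball" (cf. Figure 2(a)): categories label the edges.
john, kicked, his, ball = (Unit(w) for w in ["John", "kicked", "his", "ball"])
his_ball = Unit()
his_ball.add("E", his)
his_ball.add("C", ball)
scene = Unit()
scene.add("A", john)
scene.add("P", kicked)
scene.add("A", his_ball)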
Already in the foundational layer of UCCA, the need arises for multiple parents. For instance, in “John asked Mary to join him”, “Mary” is a Participant of both the “asking” and the “joining” Scenes. In some cases, an entity or relation is prominent in the interpretation of the Scene, but is not mentioned explicitly anywhere in the text. We mark such entities as Implicit Units. Implicit units are identical to terminals, except that they do not correspond to a stretch of text. For example, “playing games is fun” has an implicit A which corresponds to the people playing the game. UCCA annotates inter-Scene relations (linkage) and, following Basic Linguistic Theory, distinguishes between three major types of linkage. First, a Scene can be an A in another Scene. For instance, in “John said he must leave”, “he must leave” is an A inside the Scene evoked by “said”. Second, a Scene may be an E of an entity in another Scene. For instance, in “the film we saw yesterday was wonderful”, “film we saw yesterday” is a Scene that serves as an E of “film”, which is both an A in the Scene and the Center of an A in the a flat structure. In general, the coarse-grained foundational layer does not try to resolve fine scope issues. John A kicked P his E ball C A (a) John C and N Mary C A bought P a E sofa C A together D (b) the film A we A saw P yesterday D E A was F wonderful C S E C (c) Figure 2: Examples of UCCA annotation graphs. Scene evoked by “wonderful” (see Figure 2(c)). A third type of linkage covers all other cases, e.g., temporal, causal and conditional inter-Scene relations. The linked Scenes in such cases are marked as Parallel Scenes (H). The units specifying the relation between Hs are marked as Linkers (L)6. As with other relations in UCCA, Linkers and the Scenes they link are bound by a unit. Unlike common practice in grammatical annotation, linkage relations in UCCA can cross sentence boundaries, as can relations represented in other layers (e.g., coreference). UCCA therefore annotates texts comprised of several paragraphs and not individual sentences (see Section 3). Example sentences. Following are complete annotations of two abbreviated example sentences from our corpus (see Section 3). “Golf became a passion for his oldest daughter: she took daily lessons and became very good, reaching the Connecticut Golf Championship.” This sentence contains four Scenes, evoked by “became a passion”, “took daily lessons”, “became very good” and “reaching”. The individual Scenes are annotated as follows: 1. “GolfA [becameE aE passionC]P [forR hisE oldestE daughterC]A” 6It is equally plausible to include Linkers for the other two linkage types. This is not included in the current layer. 231 2. “sheA [tookF [dailyE lessonsC]C]P ” 3. “sheA ... [becameE [veryE goodC]C]S” 4. “sheA ... reachingP [theE ConnecticutE GolfE ChampionshipC ]A” There is only one explicit Linker in this sentence (“and”), which links Scenes (2) and (3). None of the Scenes is an A or an E in the other, and they are therefore all marked as Parallel Scenes. We also note that in the case of the light verb construction “took lessons” and the copula clauses “became good” and “became a passion”, the verb is not the Center of the main relation, but rather the following noun or adjective. We also note that the unit “she” is an A in Scenes (2), (3) and (4). We turn to our second example: “Cukor encouraged the studio to accept her demands.” This sentence contains three Scenes, evoked by “encouraged”, “accept” and “demands”: 1. 
CukorA encouragedP [theE studioC]A [toR [accept her demands]C ]A 2. [the studio]A ... acceptP [her demands]A 3. herA demandsP IMPA Scenes (2) and (3) act as Participants in Scenes (1) and (2) respectively. In Scene (2), there is an implicit Participant which corresponds to whatever was demanded. Note that “her demands” is a Scene, despite being a noun phrase. 2.3 UCCA’s Multi-layered Structure Additional layers may refine existing relations or otherwise annotate a complementary set of distinctions. For instance, a refinement layer can categorize linkage relations according to their semantic types (e.g., temporal, purposive, causal) or provide tense distinctions for verbs. Another immediate extension to UCCA’s foundational layer can be the annotation of coreference relations. Recall the example “John kicked his ball”. A coreference layer would annotate a relation between “John” and “his” by introducing a new node whose descendants are these two units. The fact that this node represents a coreference relation would be represented by a label on the edge connecting them to the coreference node. There are three common ways to extend an annotation graph. First, by adding a relation that relates previously established units. This is done by introducing a new node whose descendants are the related units. Second, by adding an intermediate Passage # 1 2 3 4 5 6 # Sents. 8 20 23 14 13 15 # Tokens 259 360 343 322 316 393 ITA 67.3 74.1 71.2 73.5 77.8 81.1 Vs. Gold 72.4 76.7 75.5 75.7 79.5 84.2 Correction 93.7 Table 2: The upper part of the table presents the number of sentences and the number of tokens in the first passages used for the annotator training. The middle part presents the average F-scores obtained by the annotators throughout these passages. The first row presents the average F-score when comparing the annotations of the different annotators among themselves. The second row presents the average F-score when comparing them to a “gold standard”. The bottom row shows the average F-score between an annotated passage of a trained annotator and its manual correction by an expert. It is higher due to conforming analyses (see text). All F-scores are in percents. unit between a parent unit and some of its subunits. For instance, consider “he replied foolishly” and “he foolishly replied”. A layer focusing on Adverbial scope may refine the flat Scene structure assigned by the foundational layer, expressing the scope of “foolishly” over the relation “replied” in the first case, and over the entire Scene in the second. Third, by adding sub-units to a terminal. For instance, consider “gave up”, an expression which the foundational layer considers atomic. A layer that annotates tense can break the expression into “gave” and “up”, in order to annotate “gave” as the tense-bearing unit. Although a more complete discussion of the formalism is beyond the scope of this paper, we note that the formalism is designed to allow different annotation layers to be defined and annotated independently of one another, in order to facilitate UCCA’s construction through a community effort. 3 A UCCA-Annotated Corpus The annotated text is mostly based on English Wikipedia articles for celebrities. We have chosen this genre as it is an inclusive and diverse domain, which is still accessible to annotators from varied backgrounds. For the annotation process, we designed and implemented a web application tailored for UCCA’s annotation. 
A sample of the corpus containing roughly 5K tokens, as well as the annotation application can be found in our website7. UCCA’s annotations are not confined to a single sentence. The annotation is therefore carried out in passages of 300-400 tokens. After its an7www.cs.huji.ac.il/˜omria01 232 notation, a passage is manually corrected before being inserted into the repository. The section of the corpus annotated thus far contains 56890 tokens in 148 annotated passages (average length of 385 tokens). Each passage contains 450 units on average and 42.2 Scenes. Each Scene contains an average of 2 Participants and 0.3 Adverbials. 15% of the Scenes are static (contain an S as the main relation) and the rest are dynamic (containing a P). The average number of tokens in a Scene (excluding punctuation) is 10.7. 18.3% of the Scenes are Participants in another Scene, 11.4% are Elaborator Scenes and the remaining are Parallel Scenes. A passage contains an average of 11.2 Linkers. Inter-annotator agreement. We employ 4 annotators with varying levels of background in linguistics. Two of the annotators have no background in linguistics, one took an introductory course and one holds a Bachelor’s degree in linguistics. The training process of the annotators lasted 30–40 hours, which includes the time required for them to get acquainted with the web application. As this was the first large-scale trial with the UCCA scheme, some modifications to the scheme were made during the annotator’s training. We therefore expect the training process to be even faster in later distributions. There is no standard evaluation measure for comparing two grammatical annotations in the form of labeled DAGs. We therefore converted UCCA to constituency trees8 and, following standard practice, computed the number of brackets in both trees that match in both span and label. We derive an F-score from these counts. Table 2 presents the inter-annotator agreement in the training phase. The four annotators were given the same passage in each of these cases. In addition, a “gold standard” was annotated by the authors of this paper. The table presents the average F-score between the annotators, as well as the average F-score when comparing to the gold standard. Results show that although it represents complex hierarchical structures, the UCCA scheme is learned quickly and effectively. We also examined the influence of prior linguistic background on the results. In the first passage there was a substantial advantage to the annotators 8In cases a unit had multiple parents, we discarded all but one of its incoming edges. This resulted in discarding 1.9% of the edges. We applied a simple normalization procedure to the resulting trees. who had prior training in linguistics. The obtained F-scores when comparing to a gold standard, ordered decreasingly according to the annotator’s acquaintance with linguistics, were 78%, 74.4%, 69.5% and 67.8%. However, this performance gap quickly vanished. Indeed, the obtained F-scores, again compared to a gold standard and averaged over the next five training passages, were (by the same order) 78.6%, 77.3%, 79.2% and 78%. This is an advantage of UCCA over other syntactic annotation schemes that normally require highly proficient annotators. For instance, both the PTB and the Prague Dependency Treebank (B¨ohmov´a et al., 2003) employed annotators with extensive linguistic background. 
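Concretely, the agreement figures reported above are labeled bracketing F-scores: each annotation is converted to a constituency tree and reduced to its labeled spans, and a bracket counts as matched only if both its span and its label agree. Below is a minimal sketch of this computation; the tuple representation and function names are our own, for illustration only.

```python
# Sketch of the labeled bracketing F-score used above to compare two
# annotations of the same passage, after each has been converted to a
# constituency tree and reduced to a multiset of (start, end, label) brackets.

from collections import Counter

def bracket_f1(brackets_a, brackets_b):
    """Each argument is an iterable of (start, end, label) tuples."""
    count_a, count_b = Counter(brackets_a), Counter(brackets_b)
    matched = sum((count_a & count_b).values())   # match in both span and label
    if matched == 0:
        return 0.0
    precision = matched / sum(count_a.values())
    recall = matched / sum(count_b.values())
    return 2 * precision * recall / (precision + recall)

# Two annotators bracketing a four-token sentence (token indices 0..3):
annotator_1 = [(0, 4, "Scene"), (0, 1, "A"), (1, 2, "P"), (2, 4, "A"),
               (2, 3, "E"), (3, 4, "C")]
annotator_2 = [(0, 4, "Scene"), (0, 1, "A"), (1, 2, "P"), (2, 4, "E"),
               (2, 3, "E"), (3, 4, "C")]
print(round(bracket_f1(annotator_1, annotator_2), 3))   # 0.833
```

Pairwise averages of this score over the annotators give the inter-annotator figures, and comparison against the authors' annotation gives the "vs. gold" figures.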
Similar findings to ours were reported in the PropBank project, which successfully employed annotators with various levels of linguistic background. We view this as a major advantage of semantic annotation schemes over their syntactic counterparts, especially given the huge amount of manual labor required for large syntactic annotation projects. The UCCA interface allows for multiple noncontradictory (“conforming”) analyses of a stretch of text. It assumes that in some cases there is more than one acceptable option, each highlighting a different aspect of meaning of the analyzed utterance (see below). This makes the computation of inter-annotator agreement fairly difficult. It also suggests that the above evaluation is excessively strict, as it does not take into account such conforming analyses. To address this issue, we conducted another experiment where an expert annotator corrected the produced annotations. Comparing the corrected versions to the originals, we found that F-scores are typically in the range of 90%–95%. An average taken over a sample of passages annotated by all four annotators yielded an F-score of 93.7%. It is difficult to compare the above results to the inter-annotator agreement of other projects for two reasons. First, many existing schemes are based on other annotation schemes or heavily rely on automatic tools for providing partial annotations. Second, some of the most prominent annotation projects do not provide reliable inter-annotator agreement scores (Artstein and Poesio, 2008). A recent work that did report inter-annotator agreement in terms of bracketing F-score is an annotation project of the PTB’s noun phrases with more elaborate syntactic structure (Vadas and Cur233 ran, 2011). They report an agreement of 88.3% in a scenario where their two annotators worked separately. Note that this task is much more limited in scope than UCCA (annotates noun phrases instead of complete passages in UCCA; uses 2 categories instead of 12 in UCCA). Nevertheless, the obtained inter-annotator agreement is comparable. Disagreement examples. Here we discuss two major types of disagreements that recurred in the training process. The first is the distinction between Elaborators and Centers. In most cases this distinction is straightforward, particularly where one sub-unit determines the semantic type of the parent unit, while its siblings add more information to it (e.g., “truckE companyC” is a type of a company and not of a truck). Some structures do not nicely fall into this pattern. One such case is with apposition. In the example “the Fox drama Glory days”, both “the Fox drama” and “Glory days” are reasonable candidates for being a Center, which results in disagreements. Another case is the distinction between Scenes and non-Scene relations. Consider the example “[John’s portrayal of the character] has been described as ...”. The sentence obviously contains two scenes, one in which John portrays a character and another where someone describes John’s doings. Its internal structure is therefore “John’sA portrayalP [of the character]A”. However, the syntactic structure of this unit leads annotators at times into analyzing the subject as a non-Scene relation whose C is “portrayal”. Static relations tend to be more ambiguous between a Scene and a non-Scene interpretation. Consider “Jane Smith (n´ee Ross)”. It is not at all clear whether “n´ee Ross” should be annotated as a Scene or not. 
Even if we do assume it is a Scene, it is not clear whether the Scene it evokes is her Scene of birth, which is dynamic, or a static Scene which can be paraphrased as “originally named Ross”. This leads to several conforming analyses, each expressing a somewhat different conceptualization of the Scene. This central notion will be more elaborately addressed in future work. We note that all of these disagreements can be easily resolved by introducing an additional layer focusing on the construction in question. 4 UCCA’s Benefits to Semantic Tasks UCCA’s relative insensitivity to syntactic forms has potential benefits for a wide variety of semantic tasks. This section briefly demonstrates these benefits through a number of examples. Recall the example “John took a shower” (Section 1). UCCA annotates the sentence as a single Scene, with a single Participant and a processual main relation: “JohnA [tookF [aE showerC]C ]P ”. The paraphrase “John showered” is annotated similarly: “JohnA showeredP ”. The structure is also preserved under translation to other languages, such as German (“JohnA duschteP ”, where “duschte” is a verb), or Portuguese “JohnA [tomouF banhoC]P ” (literally, John took shower). In all of these cases, UCCA annotates the example as a Scene with an A and a P, whose Center is a word expressing the notion of showering. Another example is the sentence “John does not have any money”. The foundational layer of UCCA annotates negation units as Ds, which yields the annotation “JohnA [doesF ]S- notD [haveC]-S [anyE moneyC]A” (where “does ... have” is a discontiguous unit)9. This sentence can be paraphrased as “JohnA hasP noD moneyA”. UCCA reflects the similarity of these two sentences, as it annotates both cases as a single Scene which has two Participants and a negation. A syntactic scheme would normally annotate “no” in the second sentence as a modifier of “money”, and “not” as a negation of “have”. The value of UCCA’s annotation can again be seen in translation to languages that have only one of these forms. For instance, the German translation of this sentence, “JohnA hatS keinD GeldA”, is a literal translation of “John has no money”. The Hebrew translation of this sentence is “eyn le john kesef” (literally, “there-is-no to John money”). The main relation here is therefore “eyn” (thereis-no) which will be annotated as S. This yields the annotation “eynS [leR JohnC]A kesefA”. The UCCA annotation in all of these cases is composed of two Participants and a State. In English and German, the negative polarity unit is represented as a D. The negative polarity of the Hebrew “eyn” is represented in a more detailed layer. As a third example, consider the two sentences “There are children playing in the park” and “Children are playing in the park”. The two sentences have a similar meaning but substantially different syntactic structures. The first contains two clauses, an existential main clause (headed by “there are”) 9The foundational layer places “not” in the Scene level to avoid resolving fine scope issues (see Section 2) 234 and a subordinate clause (“playing in the park”). The second contains a simple clause headed by “playing”. While the parse trees of these sentences are very different, their UCCA annotation in the foundational layer differ only in terms of Function units: “ChildrenA [areF playingC]P [inR theE parkC]A” and “ThereF areF childrenA [playing]P [inR theE parkC]A”10. 
Aside from machine translation, a great variety of semantic tasks can benefit from a scheme that is relatively insensitive to syntactic variation. Examples include text simplification (e.g., for second language teaching) (Siddharthan, 2006), paraphrase detection (Dolan et al., 2004), summarization (Knight and Marcu, 2000), and question answering (Wang et al., 2007). 5 Related Work In this section we compare UCCA to some of the major approaches to grammatical representation in NLP. We focus on English, which is the most studied language and the focus of this paper. Syntactic annotation schemes come in many forms, from lexical categories such as POS tags to intricate hierarchical structures. Some formalisms focus particularly on syntactic distinctions, while others model the syntax-semantics interface as well (Kaplan and Bresnan, 1981; Pollard and Sag, 1994; Joshi and Schabes, 1997; Steedman, 2001; Sag, 2010, inter alia). UCCA diverges from these approaches in aiming to abstract away from specific syntactic forms and to only represent semantic distinctions. Put differently, UCCA advocates an approach that treats syntax as a hidden layer when learning the mapping between form and meaning, while existing syntactic approaches aim to model it manually and explicitly. UCCA does not build on any other annotation layers and therefore implicitly assumes that semantic annotation can be learned directly. Recent work suggests that indeed structured prediction methods have reached sufficient maturity to allow direct learning of semantic distinctions. Examples include Naradowsky et al. (2012) for semantic role labeling and Kwiatkowski et al. (2010) for semantic parsing to logical forms. While structured prediction for the task of predicting tree structures is already well established (e.g., (Suzuki et al., 10The two sentences are somewhat different in terms of their information structure (Van Valin Jr., 2005), which is represented in a more detailed UCCA layer. 2009)), recent work has also successfully tackled the task of predicting semantic structures in the form of DAGs (Jones et al., 2012). The most prominent annotation scheme in NLP for English syntax is the Penn Treebank. Many syntactic schemes are built or derived from it. An increasingly popular alternative to the PTB are dependency structures, which are usually represented as trees whose nodes are the words of the sentence (Ivanova et al., 2012). Such representations are limited due to their inability to naturally represent constructions that have more than one head, or in which the identity of the head is not clear. They also face difficulties in representing units that participate in multiple relations. UCCA proposes a different formalism that addresses these problems by introducing a new node for every relation (cf. (Sangati and Mazza, 2009)). Several annotated corpora offer a joint syntactic and semantic representation. Examples include the Groningen Meaning bank (Basile et al., 2012), Treebank Semantics (Butler and Yoshimoto, 2012) and the Lingo Redwoods treebank (Oepen et al., 2004). UCCA diverges from these projects in aiming to abstract away from syntactic variation, and is therefore less coupled with a specific syntactic theory. A different strand of work addresses the construction of an interlingual representation, often with a motivation of applying it to machine translation. Examples include the UNL project (Uchida and Zhu, 2001), the IAMTC project (Dorr et al., 2010) and the AMR project (Banarescu et al., 2012). 
These projects share with UCCA their emphasis on cross-linguistically valid annotations, but diverge from UCCA in three important respects. First, UCCA emphasizes the notion of a multi-layer structure where the basic layers are maximally coarse-grained, in contrast to the above works that use far more elaborate representations. Second, from a theoretical point of view, UCCA differs from these works in aiming to represent conceptual semantics, building on works in Cognitive Linguistics (e.g., (Langacker, 2008)). Third, unlike interlingua that generally define abstract representations that may correspond to several different texts, UCCA incorporates the text into its structure, thereby facilitating learning. Semantic role labeling (SRL) schemes bear similarity to the foundational layer, due to their focus on argument structure. The leading SRL ap235 proaches are PropBank (Palmer et al., 2005) and NomBank (Meyers et al., 2004) on the one hand, and FrameNet (Baker et al., 1998) on the other. At this point, all these schemes provide a more finegrained set of categories than UCCA. PropBank and NomBank are built on top of the PTB annotation, and provide for each verb (PropBank) and noun (NomBank), a delineation of their arguments and their categorization into semantic roles. Their structures therefore follow the syntax of English quite closely. UCCA is generally less tailored to the syntax of English (e.g., see secondary verbs (Dixon, 2005)). Furthermore, PropBank and NomBank do not annotate the internal structure of their arguments. Indeed, the construction of the commonly used semantic dependencies derived from these schemes (Surdeanu et al., 2008) required a set of syntactic head percolation rules to be used. These rules are somewhat arbitrary (Schwartz et al., 2011), do not support multiple heads, and often reflect syntactic rather than semantic considerations (e.g., “millions” is the head of “millions of dollars”, while “dollars” is the head of “five million dollars”). Another difference is that PropBank and NomBank each annotate only a subset of predicates, while UCCA is more inclusive. This difference is most apparent in cases where a single complex predicate contains both nominal and verbal components (e.g., “limit access”, “take a shower”). In addition, neither PropBank nor Nomabnk address copula clauses, despite their frequency. Finally, unlike PropBank and NomBank, UCCA’s foundational layer annotates linkage relations. In order to quantify the similarity between UCCA and PropBank, we annotated 30 sentences from the PropBank corpus with their UCCA annotations and converted the outcome to PropBankstyle annotations11. We obtained an unlabeled F-score of 89.4% when comparing to PropBank, which indicates that PropBank-style annotations are generally derivable from UCCA’s. The disagreement between the schemes reflects both annotation conventions and principle differences, some of which were discussed above. The FrameNet project (Baker et al., 1998) 11The experiment was conducted on the first 30 sentences of section 02. The identity of the predicates was determined according to the PropBank annotation. We applied a simple conversion procedure that uses half a dozen rules that are not conditioned on any lexical item. We used a strict evaluation that requires an exact match in the argument’s boundaries. proposes a comprehensive approach to semantic roles. It defines a lexical database of Frames, each containing a set of possible frame elements and their semantic roles. 
It bears similarity to UCCA both in its use of Frames, which are a contextindependent abstraction of UCCA’s Scenes, and in its emphasis on semantic rather than distributional considerations. However, despite these similarities, FrameNet focuses on constructing a lexical resource covering specific cases of interest, and does not provide a fully annotated corpus of naturally occurring text. UCCA’s foundational layer can be seen as a complementary effort to FrameNet, as it focuses on high-coverage, coarsegrained annotation, while FrameNet is more finegrained at the expense of coverage. 6 Conclusion This paper presented Universal Conceptual Cognitive Annotation (UCCA), a novel framework for semantic representation. We described the foundational layer of UCCA and the compilation of a UCCA-annotated corpus. We demonstrated UCCA’s relative insensitivity to paraphrases and cross-linguistic syntactic variation. We also discussed UCCA’s accessibility to annotators with no background in linguistics, which can alleviate the almost prohibitive annotation costs of large syntactic annotation projects. UCCA’s representation is guided by conceptual notions and has its roots in the Cognitive Linguistics tradition and specifically in Cognitive Grammar (Langacker, 2008). These theories represent the meaning of an utterance according to the mental representations it evokes and not according to its reference in the world. Future work will explore options to further reduce manual annotation, possibly by combining texts with visual inputs during training. We are currently attempting to construct a parser for UCCA and to apply it to several semantic tasks, notably English-French machine translation. Future work will also discuss UCCA’s portability across domains. We intend to show that UCCA, which is less sensitive to the idiosyncrasies of a specific domain, can be easily adapted to highly dynamic domains such as social media. Acknowledgements. We would like to thank Tomer Eshet for partnering in the development of the web application and to Amit Beka for his help with UCCA’s software and development set. 236 References Omri Abend and Ari Rappoport. 2013. UCCA: A semantics-based grammatical annotation scheme. In IWCS ’13, pages 1–12. Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley Framenet project. In ACLCOLING ’98, pages 86–90. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2012. Abstract meaning representation (AMR) 1.0 specification. http://www.isi.edu/natural-language/people/amrguidelines-10-31-12.pdf. Valerio Basile, Johan Bos, Kilian Evang, and Noortje Venhuizen. 2012. Developing a large semantically annotated corpus. In LREC ’12, pages 3196–3200. Alena B¨ohmov´a, Jan Hajiˇc, Eva Hajiˇcov´a, and Barbora Hladk´a. 2003. The Prague Dependency Treebank. Treebanks, pages 103–127. Alistair Butler and Kei Yoshimoto. 2012. Banking meaning representations from treebanks. Linguistic Issues in Language Technology, 7(1). Alexander Clark and Shalom Lappin. 2010. Linguistic Nativism and the Poverty of the Stimulus. WileyBlackwell. Robert M. W. Dixon. 2005. A Semantic Approach To English Grammar. Oxford University Press. Robert M. W. Dixon. 2010a. Basic Linguistic Theory: Methodology, volume 1. Oxford University Press. Robert M. W. Dixon. 2010b. 
Basic Linguistic Theory: Grammatical Topics, volume 2. Oxford University Press. Robert M. W. Dixon. 2012. Basic Linguistic Theory: Further Grammatical Topics, volume 3. Oxford University Press. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In COLING ’04, pages 350–356. Bonnie Dorr, Rebecca Passonneau, David Farwell, Rebecca Green, Nizar Habash, Stephen Helmreich, Edward Hovy, Lori Levin, Keith Miller, Teruko Mitamura, Owen Rambow, and Advaith Siddharthan. 2010. Interlingual annotation of parallel text corpora: A new framework for annotation and evaluation. Natural Language Engineering, 16(3):197– 243. Angelina Ivanova, Stephan Oepen, Lilja Øvrelid, and Dan Flickinger. 2012. Who did what to whom?: A contrastive study of syntacto-semantic dependencies. In LAW ’12, pages 2–11. Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-based machine translation with hyperedge replacement grammars. In COLING ’12, pages 1359–1376. Aravind K. Joshi and Yves Schabes. 1997. Treeadjoining grammars. Handbook Of Formal Languages, 3:69–123. Ronald M. Kaplan and Joan Bresnan. 1981. LexicalFunctional Grammar: A Formal System For Grammatical Representation. Massachusetts Institute Of Technology, Center For Cognitive Science. Kevin Knight and Daniel Marcu. 2000. Statisticsbased summarization – step one: Sentence compression. In AAAI ’00, pages 703–710. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higherorder unification. In EMNLP ’10, pages 1223–1233. R.W. Langacker. 2008. Cognitive Grammar: A Basic Introduction. Oxford University Press, USA. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. Annotating noun argument structure for Nombank. In LREC ’04, pages 803–806. Jason Naradowsky, Sebastian Riedel, and David Smith. 2012. Improving NLP through marginalization of hidden syntactic structure. In EMNLP ’12, pages 810–820. Stephan Oepen, Dan Flickinger, Kristina Toutanova, and Christopher D Manning. 2004. Lingo redwoods. Research on Language and Computation, 2(4):575–596. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):145–159. Carl Pollard and Ivan A. Sag. 1994. Head-driven Phrase Structure Grammar. University Of Chicago Press. Ivan A Sag. 2010. Sign-based construction grammar: An informal synopsis. Sign-based Construction Grammar. CSLI Publications, Stanford, pages 39–170. 237 Federico Sangati and Chiara Mazza. 2009. An English dependency treebank `a la Tesni`ere. In TLT ’09, pages 173–184. Roy Schwartz, Omri Abend, Roi Reichart, and Ari Rappoport. 2011. Neutralizing linguistically problematic annotations in unsupervised dependency parsing evaluation. In ACL-HLT ’11, pages 663– 672. Advaith Siddharthan. 2006. Syntactic simplification and text cohesion. Research on Language & Computation, 4(1):77–109. Mark Steedman. 2001. The Syntactic Process. MIT Press. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. 
In CoNLL ’08, pages 159–177. Jun Suzuki, Hideki Isozaki, Xavier Carreras, and Michael Collins. 2009. An empirical study of semisupervised structured conditional models for dependency parsing. In EMNLP ’09, pages 551–560. Hiroshi Uchida and Meiying Zhu. 2001. The universal networking language beyond machine translation. In International Symposium on Language in Cyberspace, pages 26–27. David Vadas and James R Curran. 2011. Parsing noun phrases in the Penn Treebank. Computational Linguistics, 37(4):753–809. Robert D. Van Valin Jr. 2005. Exploring The Syntaxsemantics Interface. Cambridge University Press. Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. What is the Jeopardy model? A quasisynchronous grammar for QA. In EMNLP-CoNLL ’07, pages 22–32. 238
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 239–249, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Linking Tweets to News: A Framework to Enrich Short Text Data in Social Media Weiwei Guo∗and Hao Li† and Heng Ji† and Mona Diab‡ ∗Department of Computer Science, Columbia University †Computer Science Department and Linguistic Department, Queens College and Graduate Center, City University of New York ‡Department of Computer Science, George Washington University [email protected], {haoli.qc,hengjicuny}@gmail.com, [email protected] Abstract Many current Natural Language Processing [NLP] techniques work well assuming a large context of text as input data. However they become ineffective when applied to short texts such as Twitter feeds. To overcome the issue, we want to find a related newswire document to a given tweet to provide contextual support for NLP tasks. This requires robust modeling and understanding of the semantics of short texts. The contribution of the paper is two-fold: 1. we introduce the Linking-Tweets-toNews task as well as a dataset of linked tweet-news pairs, which can benefit many NLP applications; 2. in contrast to previous research which focuses on lexical features within the short texts (text-to-word information), we propose a graph based latent variable model that models the inter short text correlations (text-to-text information). This is motivated by the observation that a tweet usually only covers one aspect of an event. We show that using tweet specific feature (hashtag) and news specific feature (named entities) as well as temporal constraints, we are able to extract text-to-text correlations, and thus completes the semantic picture of a short text. Our experiments show significant improvement of our new model over baselines with three evaluation metrics in the new task. 1 Introduction Recently there has been an increasing interest in language understanding of Twitter messages. Researchers (Speriosui et al., 2011; Brody and Diakopoulos, 2011; Jiang et al., 2011) were interested in sentiment analysis on Twitter feeds, and opinion mining towards political issues or politicians (Tumasjan et al., 2010; Conover et al., 2011). Others (Ramage et al., 2010; Jin et al., 2011) summarized tweets using topic models. Although these NLP techniques are mature, their performance on tweets inevitably degrades, due to the inherent sparsity in short texts. In the case of sentiment analysis, while people are able to achieve 87.5% accuracy (Maas et al., 2011) on a movie review dataset (Pang and Lee, 2004), the performance drops to 75% (Li et al., 2012) on a sentence level movie review dataset (Pang and Lee, 2005). The problem worsens when some existing NLP systems cannot produce any results given the short texts. Considering the following tweet: Pray for Mali... A typical event extraction/discovery system (Ji and Grishman, 2008) fails to discover the war event due to the lack of context information (Benson et al., 2011), and thus fails to shed light on the users focus/interests. To enable the NLP tools to better understand Twitter feeds, we propose the task of linking a tweet to a news article that is relevant to the tweet, thereby augmenting the context of the tweet. 
For example, we want to supplement the implicit context of the above tweet with a news article such as the following entitled: State of emergency declared in Mali where abundant evidence can be fed into an offthe-shelf event extraction/discovery system. To create a gold standard dataset, we download tweets spanning over 18 days, each with a url linking to a news article of CNN or NYTIMES, as well as all the news of CNN and NYTIMES published during the period. The goal is to predict the url referred news article based on the text in each tweet.1 We 1The data and code is publicly available at www.cs. 239 believe many NLP tasks will benefit from this task. In fact, in the topic modeling research, previous work (Jin et al., 2011) already showed that by incorporating webpages whose urls are contained in tweets, the tweet clustering purity score was boosted from 0.280 to 0.392. Given the few number of words in a tweet (14 words on average in our dataset), the traditional high dimensional surface word matching is lossy and fails to pinpoint the news article. This constitutes a classic short text semantics impediment (Agirre et al., 2012). Latent variable models are powerful by going beyond the surface word level and mapping short texts into a low dimensional dense vector (Socher et al., 2011; Guo and Diab, 2012b). Accordingly, we apply a latent variable model, namely, the Weighted Textual Matrix Factorization [WTMF] (Guo and Diab, 2012b; Guo and Diab, 2012c) to both the tweets and the news articles. WTMF is a state-of-the-art unsupervised model that was tested on two short text similarity datasets: (Li et al., 2006) and (Agirre et al., 2012), which outperforms Latent Semantic Analysis [LSA] (Landauer et al., 1998) and Latent Dirichelet Allocation [LDA] (Blei et al., 2003) by a large margin. We employ it as a strong baseline in this task as it exploits and effectively models the missing words in a tweet, in practice adding thousands of more features for the tweet, by contrast LDA, for example, only leverages observed words (14 features) to infer the latent vector for a tweet. Apart from the data sparseness, our dataset proposes another challenge: a tweet usually covers only one aspect of an event. In our previous example, the tweet only contains the location Mali while the event is about French army participated in Mali war. In this scenario, we would like to find the missing elements of the tweet such as French, war from other short texts, to complete the semantic picture of Pray in Mali tweet. One drawback of WTMF for our purposes is that it simply models the text-to-word information without leveraging the correlation between short texts. While this is acceptable on standard short text similarity datasets (data points are independently generated), it ignores some valuable information characteristically present in our dataset: (1) The tweet specific features such as hashtags. Hashtags prove to be a direct indication of the semantics of tweets (Ramage et al., 2010); (2) The news specific features columbia.edu/˜weiwei such as named entities in a document. Named entities acquired from a news document, typically with high accuracy using Named Entity Recognition [NER] tools, may be particularly informative. If two texts mention the same entities then they might describe the same event; (3) The temporal information in both genres (tweets and news articles). We note that there is a higher chance of event description overlap between two texts if their time of publication is similar. 
In this paper, we study the problem of mining and exploiting correlations between texts using these features. Two texts may be considered related or complementary if they share a hashtag/NE or satisfies the temporal constraints. Our proposed latent variable model not only models text-to-word information, but also is aware of the text-to-text information (illustrated in Figure 1): two linked texts should have similar latent vectors, accordingly the semantic picture of a tweet is completed by receiving semantics from its related tweets. We incorporate this additional information in the WTMF model. We also show the different impact of the text-to-text relations in the tweet genre and news genre. We are able to achieve significantly better results than with a text-to-words WTMF model. This work can be regarded as a short text modeling approach that extends previous work however with a focus on combining the mining of information within short texts coupled with utilizing extra shared information across the short texts. 2 Task and Data The task is given the text in a tweet, a system aims to find the most relevant news article. For gold standard data, we harvest all the tweets that have a single url link to a CNN or NYTIMES news article, dated from the 11th of Jan to the 27th of Jan, 2013. In evaluation, we consider this url-referred news article as the gold standard – the most relevant document for the tweet, and remove the url from the text of the tweet. We also collect all the news articles from both CNN and NYTIMES from RSS feeds during the same timeframe. Each tweet entry has the published time, author, text; each news entry contains published time, title, news summary, url. The tweet/news pairs are extracted by matching urls. We manually filtered “trivial” tweets where the tweet content is simply the news title or news summary. The final dataset results in 240 (a)   t1   t2   n2   n1   t3   w1   w2   w3   w4   w5   w6   w7   w8   t1   t2   n2   n1   t3   w1   w2   w3   w4   w5   w6   w7   w8   #healthcare   Obama   temporal   (b)   Figure 1: (a) WTMF. (b) WTMF-G: the tweet nodes t and news nodes n are connected by hashtags, named entities or temporal edges (for simplicity, the missing tokens are not shown in the figure) 34,888 tweets and 12,704 news articles. It is worth noting that the news corpus is not restricted to current events. It covers various genres and topics, such as travel guides. e.g. World’s most beautiful lakes, and health issues, e.g. The importance of a ‘stop day’, etc. 2.1 Evaluation metric For our task evaluation, ideally, we would like the system to be able to identify the news article specifically referred to by the url within each tweet in the gold standard. However, this is very difficult given the large number of potential candidates, especially those with slight variations. Therefore, following the Concept Definition Retrieval task in (Guo and Diab, 2012b) and (Steck, 2010) we use a metric for evaluating the ranking of the correct news article to evaluate the systems, namely, ATOPt, area under the TOPKt(k) recall curve for a tweet t. Basically, it is the normalized ranking ∈[0, 1] of the correct news article among all candidate news articles: ATOPt = 1 means the url-referred news article has the highest similarity value with the tweet (a correct NARU); ATOPt = 0.95 means the similarity value with correct news article is larger than 95% of the candidates, i.e. within the top 5% of the candidates. 
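As a concrete illustration of these ranking-based metrics (the formal definition of ATOP follows in Equation 1 below), the scores for a single tweet can be computed from the rank of the url-referred article among its scored candidates, as in this sketch. The function names and the exact tie and normalization handling are our own assumptions, not the paper's code.

```python
# Illustrative computation of ATOP, TOP10 and RR for one tweet.
# `scores` maps candidate news ids to cosine similarities with the tweet;
# `gold` is the id of the url-referred article.

def rank_of_gold(scores, gold):
    """1-based rank of the gold article (ties counted conservatively)."""
    gold_score = scores[gold]
    return 1 + sum(1 for nid, s in scores.items() if nid != gold and s > gold_score)

def atop(scores, gold):
    # Fraction of the other candidates ranked below the gold article.
    n = len(scores)
    return (n - rank_of_gold(scores, gold)) / (n - 1) if n > 1 else 1.0

def top10(scores, gold):
    return 1.0 if rank_of_gold(scores, gold) <= 10 else 0.0

def reciprocal_rank(scores, gold):
    return 1.0 / rank_of_gold(scores, gold)

# A toy candidate pool of 5 articles (the paper uses 1,000 per tweet).
scores = {"n1": 0.12, "n2": 0.55, "n3": 0.31, "n4": 0.07, "n5": 0.48}
gold = "n3"
print(atop(scores, gold), top10(scores, gold), reciprocal_rank(scores, gold))
# -> 0.5 1.0 0.333...
```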
ATOP_t is calculated as follows:

$$\mathrm{ATOP}_t = \int_0^1 \mathrm{TOPK}_t(k)\, dk \qquad (1)$$

where TOPK_t(k) = 1 if the url-referred news article is in the "top k" list, and TOPK_t(k) = 0 otherwise. Here k ∈ [0, 1] is the relative position (when k = 1, it covers all the candidates). We also include other metrics to examine whether the system is able to rank the url-referred news article among the first few returned results: TOP10, the recall hit rate that evaluates whether the correct news article is in the top 10 results, and RR, the Reciprocal Rank = 1/r (e.g., RR = 1/3 when the correct news article is ranked at the 3rd highest place).

3 Weighted Textual Matrix Factorization

The WTMF model (Guo and Diab, 2012a) has been successfully applied to the short text similarity task, achieving state-of-the-art unsupervised performance. This can be attributed to the fact that it models the missing tokens as features, thereby adding many more features for a short text. The missing words of a sentence are defined as all the vocabulary of the training corpus minus the observed words in the sentence. Missing words serve as negative examples for the semantics of a short text: the short text should not be related to the missing words. As per (Guo and Diab, 2012b), the corpus is represented in a matrix X, where each cell stores the TF-IDF value of a word. The rows of X are words and the columns are short texts. As in Figure 2, the matrix X is approximated by the product of a K × M matrix P and a K × N matrix Q. Accordingly, each sentence s_j is represented by a K-dimensional latent vector Q_{·,j}; similarly, a word w_i is generalized by P_{·,i}. The inner product of a word vector P_{·,i} and a short text vector Q_{·,j} thus approximates the cell X_{ij} (the shaded part in Figure 2). In this way, the missing words are modeled by requiring the inner product of a word vector and a short text vector to be close to 0 (the word and the short text should be irrelevant). Since 99% of the cells in X are missing tokens (0 values), the impact of observed words would be significantly diminished; therefore a small weight w_m is assigned to each 0 cell (missing token) in the matrix X in order to preserve the influence of observed words.

Figure 2: Weighted Textual Matrix Factorization

P and Q are optimized by minimizing the objective function:

$$\sum_i \sum_j W_{ij} \left(P_{\cdot,i} \cdot Q_{\cdot,j} - X_{ij}\right)^2 + \lambda \lVert P \rVert_2^2 + \lambda \lVert Q \rVert_2^2 \qquad (2)$$

$$W_{ij} = \begin{cases} 1, & \text{if } X_{ij} \neq 0 \\ w_m, & \text{if } X_{ij} = 0 \end{cases}$$

where λ is a regularization term.

4 Creating Text-to-text Relations via Twitter/News Features

WTMF exploits the text-to-word information in a very nuanced way, but the dependencies between texts are ignored. In this section, we introduce how to create text-to-text relations.

4.1 Hashtags and Named Entities

Hashtags highlight the topics in tweets, e.g., "The #flu season has started." We believe two tweets sharing the same hashtag should be related, hence we place a link between them to explicitly inform the model that these two tweets should be similar. We find that only 8,701 tweets out of 34,888 include hashtags. In fact, we observe that many hashtag words are mentioned in tweets without explicitly being tagged with #. To overcome this hashtag sparseness issue, one could resort to keyword recommendation algorithms to mine hashtags for the tweets (Yang et al., 2012). In this paper, we adopt a simple but effective approach: we collect all the hashtags in the dataset, and automatically hashtag any word in a tweet if that word appears hashtagged in any other tweet. This process resulted in 33,242 tweets automatically labeled with hashtags.
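A minimal sketch of this automatic hashtag labeling step follows; the tokenization, lowercasing and function names are our own simplifying assumptions.

```python
# Sketch of the automatic hashtag labeling described above: collect every
# hashtag observed anywhere in the collection, then mark a word of a tweet
# as an (implicit) hashtag if that word ever appears hashtagged elsewhere.

import re

def collect_hashtags(tweets):
    tags = set()
    for text in tweets:
        tags.update(m.lower() for m in re.findall(r"#(\w+)", text))
    return tags

def label_hashtags(tweet, known_tags):
    """Return the set of explicit or implicit hashtags for one tweet."""
    explicit = {m.lower() for m in re.findall(r"#(\w+)", tweet)}
    words = {w.lower() for w in re.findall(r"\w+", tweet)}
    return explicit | (words & known_tags)

tweets = ["The #flu season has started",
          "Got my flu shot today",
          "Pray for Mali..."]
known = collect_hashtags(tweets)            # {"flu"}
print([sorted(label_hashtags(t, known)) for t in tweets])
# [['flu'], ['flu'], []]
```

The same string-matching idea is reused below for named entities extracted from the news summaries.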
For each tweet, and for each hashtag it contains, we extract k tweets that contain this hashtag, assuming they are complementary to the target tweet, and link those k tweets to the target tweet. If more than k such tweets are found, we choose the top k that are chronologically closest to the target tweet. The statistics of the links can be found in Table 2.

Named entities are among the most salient features of a news article. Directly applying Named Entity Recognition (NER) tools to news titles or tweets results in many errors (Liu et al., 2011) due to the noise in the data, such as slang and capitalization. Accordingly, we first apply the NER tool to the news summaries, and then label named entities in the tweets in the same way as we label hashtags: if a string in the tweet matches a named entity from the summaries, it is labeled as a named entity in the tweet. 25,132 tweets are assigned at least one named entity.2 To create the similar tweet set, we find k tweets that also contain the named entity.

2Note that some false positive named entities are detected, such as "apple". We plan to address the removal of noisy named entities and hashtags in future work.

4.2 Temporal Relations

Tweets published in the same time interval have a larger chance of being similar than those that are not chronologically close (Wang and McCallum, 2006). However, we cannot simply assume two tweets are similar based only on their timestamps. Therefore, for each tweet we link it to the k most similar tweets whose publication time is within 24 hours of the target tweet's timestamp. We use the similarity score returned by the WTMF model to measure the similarity of two tweets.

We also experimented with other features such as authorship, but found it unhelpful. While authorship information helps in the task of news/tweet recommendation for a user (Corso et al., 2005; Yan et al., 2012), it is too general for this task, where we aim to "recommend" a news article for a single tweet.

4.3 Creating Relations on News

We extract the 3 subgraphs (based on hashtags, named entities and temporal relations) on news articles as well. However, automatically tagging hashtags or named entities on news leads to much worse performance (around 93% ATOP, a 3% decrease from the baseline WTMF). There are several reasons for this: 1. When a hashtag-matched word appears in a tweet, it is often related to the central meaning of the tweet; news articles, however, are generally much longer than tweets, resulting in many more hashtag/named entity matches even though these matches may not be closely related. 2. The noise introduced during automatic NER accumulates much faster given the large number of named entities in news data. Therefore we only extract temporal relations for news articles.

5 WTMF on Graphs

We propose a novel model to incorporate the links generated as described in the previous section. If two texts are connected by a link, they should be semantically similar. In the WTMF model, we would like the latent vectors of two linked text nodes Q_{·,j1}, Q_{·,j2} to be as similar as possible, namely for their cosine similarity to be close to 1. To implement this, we add a regularization term to the objective function of WTMF (Equation 2) for each linked pair Q_{·,j1}, Q_{·,j2}:

$$\delta \cdot \left( \frac{Q_{\cdot,j_1} \cdot Q_{\cdot,j_2}}{\lvert Q_{\cdot,j_1}\rvert\,\lvert Q_{\cdot,j_2}\rvert} - 1 \right)^2 \qquad (3)$$

where |Q_{·,j}| denotes the length of vector Q_{·,j}. The coefficient δ denotes the importance of the text-to-text links.
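As a concrete sketch of how the penalty in Equation 3 acts on a set of links, the snippet below sums the term over linked pairs of text vectors. The use of numpy, the link-list format and the variable names are our own assumptions, not the paper's implementation.

```python
# Sketch of the graph regularizer in Equation 3: for every linked pair of
# texts, penalize the squared deviation of their cosine similarity from 1.
# Q is a K x N matrix of text latent vectors; `links` is a list of index
# pairs produced by the hashtag / named entity / temporal linking above.

import numpy as np

def link_penalty(Q, links, delta):
    total = 0.0
    for j1, j2 in links:
        v1, v2 = Q[:, j1], Q[:, j2]
        cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        total += delta * (cos - 1.0) ** 2
    return total

rng = np.random.default_rng(0)
Q = rng.normal(size=(100, 5))          # 5 toy texts in a 100-dim latent space
links = [(0, 1), (0, 3), (2, 4)]       # e.g. tweets sharing a hashtag
print(link_penalty(Q, links, delta=3.0))
```

In the full objective this penalty is summed over all linked pairs and added to Equation 2, so that linked texts are pulled toward each other in the latent space.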
A larger δ means we put more weight on the text-to-text links and less on the text-to-word links. We refer to this model as WTMF-G (WTMF on graphs).

5.1 Inference

Alternating Least Squares [ALS] is used for inference in weighted matrix factorization (Srebro and Jaakkola, 2003). However, ALS is no longer directly applicable here, since the new regularization term (Equation 3) involves the lengths of the text vectors |Q_{·,j}| and is therefore not in quadratic form. We approximate the objective function by treating the vector lengths |Q_{·,j}| as fixed values during the ALS iterations:

$$P_{\cdot,i} = \left( Q \tilde{W}^{(i)} Q^\top + \lambda I \right)^{-1} Q \tilde{W}^{(i)} X_{i,\cdot}^\top$$
$$Q_{\cdot,j} = \left( P \tilde{W}^{(j)} P^\top + \lambda I + \delta L_j^2\, Q_{\cdot,n(j)}\, \mathrm{diag}\!\left(L_{n(j)}^2\right) Q_{\cdot,n(j)}^\top \right)^{-1} \left( P \tilde{W}^{(j)} X_{\cdot,j} + \delta L_j\, Q_{\cdot,n(j)} L_{n(j)} \right) \qquad (4)$$

We define n(j) as the set of linked neighbors of short text j, and Q_{·,n(j)} as the matrix whose columns are the latent vectors of j's neighbors. The reciprocals of the lengths of these neighbor vectors in the current iteration are stored in the vector L_{n(j)}; similarly, the reciprocal of the length of the short text vector Q_{·,j} is L_j. W̃^{(i)} = diag(W_{i,·}) is the diagonal matrix containing the ith row of the weight matrix W (W̃^{(j)} is built analogously from the jth column). Due to limited space, the details of the optimization are not shown in this paper; they can be found in (Steck, 2010).

6 Experiments

6.1 Experiment Setting

Corpora: We use the same corpora as in (Guo and Diab, 2012b): the Brown corpus (each sentence is treated as a document) and the sense definitions of Wiktionary and WordNet (Fellbaum, 1998). The tweets and news articles are also included in the corpus, yielding 441,258 short texts and 5,149,122 words. The data is tokenized, POS-tagged with the Stanford POS tagger (Toutanova et al., 2003), and lemmatized with WordNet::QueryData.pm. The value of each word in matrix X is its TF-IDF value in the short text.

Baselines: We present 4 baselines: 1. The Information Retrieval model [IR], which simply treats a tweet as a document and performs traditional surface word matching. 2. LDA-θ with Gibbs sampling as the inference method; we use the inferred topic distribution θ as the latent vector representing the tweet/news article. 3. LDA-wvec. The problem with LDA-θ is that the inferred topic distribution is very sparse, with only a few non-zero values, so many tweet/news pairs receive a high similarity value as long as they are in the same topic domain. Hence, following (Guo and Diab, 2012b), we first compute the latent vector of a word from P(z|w) (the topic distribution per word), and then average the word latent vectors weighted by TF-IDF values to represent the short text, which yields much better results. 4. WTMF. In these baselines, hashtags and named entities are simply treated as words.

To curtail variation in results due to randomness, each reported number is the average of 10 runs. For WTMF and WTMF-G, we assign the same initial random values and run 20 iterations. In both systems we fix the missing-words weight at w_m = 0.01 and the regularization coefficient at λ = 20, which is the best configuration of WTMF found in (Guo and Diab, 2012b; Guo and Diab, 2012c). For LDA-θ and LDA-wvec, we run Gibbs-sampling-based LDA for 2000 iterations and average the model over the last 10 iterations.

Evaluation: The similarity between a tweet and a news article is measured by cosine similarity. A news article is represented as the concatenation of its title and its summary, which yields better performance.3 As in (Guo and Diab, 2012b), for each tweet we collect the 1,000 news articles published prior to the tweet whose dates of publication are closest to that of the tweet.4 The cosine similarity score between the url-referred news article and the tweet is compared against the scores of these 1,000 news articles to calculate the metric scores. 1/10 of the tweet/news pairs are used as a development set, based on which all the parameters are tuned. The metrics ATOP, TOP10 and RR are used to evaluate the performance of the systems.

3When the two are used separately, WTMF receives an ATOP of 95.558% representing a news article by its title only and 94.385% by its summary only.
4Ideally we would include all the news articles published prior to the tweet; however, that would bias the comparison, since later tweets would have a larger candidate set than earlier ones.

6.2 Results

Table 1 summarizes the performance of the baselines and WTMF-G at latent dimension D = 100. All the parameters are chosen based on the development set. For WTMF-G, we try different values of k (the number of neighbors linked to a tweet/news article by a hashtag/NE/time constraint) and δ (the weight of the link information). We choose to model the links in four subgraphs: (a) hashtags in tweets; (b) named entities in tweets; (c) time in tweets; (d) time in news articles. For LDA we tune the hyperparameters α (the Dirichlet prior on the topic distribution of a document) and β (the Dirichlet prior on the word distribution given a topic).

Table 1: ATOP Performance (latent dimension D = 100 for LDA/WTMF/WTMF-G); each cell gives dev / test.

Models    | Parameters         | ATOP              | TOP10             | RR
IR        |                    | 90.795% / 90.743% | 73.478% / 74.103% | 46.024% / 46.281%
LDA-θ     | α = 0.05, β = 0.05 | 81.368% / 81.251% | 32.328% / 31.207% | 13.134% / 12.469%
LDA-wvec  | α = 0.05, β = 0.05 | 94.148% / 94.196% | 53.500% / 53.952% | 28.743% / 27.904%
WTMF      |                    | 95.964% / 96.092% | 75.327% / 76.411% | 45.310% / 46.270%
WTMF-G    | k = 3, δ = 3       | 96.450% / 96.543% | 76.485% / 77.479% | 47.516% / 48.665%
WTMF-G    | k = 5, δ = 3       | 96.613% / 96.701% | 76.029% / 77.176% | 47.197% / 48.189%
WTMF-G    | k = 4, δ = 3       | 96.510% / 96.610% | 77.782% / 77.782% | 47.917% / 48.997%

Figure 3: Impact of δ (D = 100, k = 4); panels show (a) ATOP, (b) TOP10, (c) RR on dev and test.

It is worth noting that ATOP measures the overall ranking among the 1,000 candidates, while TOP10/RR focus on whether the aligned news article is among the first few returned results. As reported in (Guo and Diab, 2012b), LDA-θ has the worst results, due to directly using the inferred topic distribution θ of a text: the inferred topic vector has only a few non-zero values, hence a lot of information is missing. LDA-wvec preserves more information by creating a dense latent vector from the per-word topic distribution P(z|w), and thus does much better in ATOP. It is interesting to see that the IR model has a very low ATOP (90.795%) and an acceptable RR (46.281%) score, in contrast to LDA-wvec with a high ATOP (94.148%) and a low RR (27.904%) score. This is caused by the nature of the two models. LDA-wvec is able to identify global coarse-grained topic information (such as politics vs. economics), hence receiving a high ATOP by excluding the most irrelevant news articles; however, it does not distinguish fine-grained differences such as Hillary vs. Obama. The IR model exerts the opposite influence via word matching: it ranks a correct news article very high if overlapping words exist (leading to a high RR), or very low if there are no overlapping words (hence a low ATOP). We can conclude that WTMF is a very strong baseline, given that it achieves high scores on all three metrics.
As a latent variable model, it is able to capture global topics (+1.89% ATOP over LDA-wvec); moreover, by explicitly modeling missing words, the existence of a word is also encoded in the latent vector (+2.31% TOP10 and −0.011% RR over the IR model).

Figure 4: Impact of latent dimension D (k = 4); panels show (a) ATOP, (b) TOP10, (c) RR for WTMF and WTMF-G.

Table 2: Contribution of subgraphs when D = 100, k = 4, δ = 3 (gain over baseline WTMF); each cell gives dev / test.

Conditions                       | Links   | ATOP              | TOP10             | RR
hashtags tweets                  | 375,371 | +0.397% / +0.379% | +1.015% / +1.021% | +0.504% / +0.641%
NE tweets                        | 164,412 | +0.141% / +0.130% | +0.598% / +0.479% | +0.278% / +0.294%
time tweet                       | 139,488 | +0.126% / +0.136% | +0.512% / +0.503% | +0.241% / +0.327%
time news                        | 50,008  | +0.036% / +0.026% | +0.156% / +0.256% | +1.890% / +1.924%
full model (all 4 subgraphs)     | 573,999 | +0.546% / +0.518% | +1.556% / +1.371% | +2.607% / +2.727%
full model minus hashtags tweets | 336,963 | +0.288% / +0.276% | +1.129% / +1.037% | +2.488% / +2.541%
full model minus NE tweets       | 536,333 | +0.528% / +0.503% | +1.518% / +1.393% | +2.580% / +2.680%
full model minus time tweet      | 466,207 | +0.457% / +0.426% | +1.281% / +1.145% | +2.449% / +2.554%
full model minus time news       | 523,991 | +0.508% / +0.490% | +1.300% / +1.190% | +0.632% / +0.785%
author tweet                     | 21,318  | +0.043% / +0.042% | +0.028% / +0.057% | −0.003% / −0.017%
full model plus author tweet     | 593,483 | +0.575% / +0.545% | +1.465% / +1.336% | +2.415% / +2.547%

With WTMF being a very challenging baseline, WTMF-G still significantly improves all 3 metrics. In the case k = 4, δ = 3, compared to WTMF, WTMF-G gains +1.371% TOP10, +2.727% RR, and +0.518% ATOP (a substantial ATOP improvement considering that it is averaged over 30,000 data points at an already high level of 96%, reducing the error rate by 13%). All the improvements of WTMF-G over WTMF are statistically significant at the 99% confidence level with a two-tailed paired t-test.

We also present results using different numbers of links k in WTMF-G in Table 1. We experiment with k = {3, 4, 5}; k = 4 is found to be the optimal value (although k = 5 has a better ATOP). Figure 3 demonstrates the impact of δ = {0, 1, 2, 3, 4} on each metric when k = 4. Note that when δ = 0 no links are used, which is the baseline WTMF. We can see that using links is always helpful. When δ = 4, we receive a higher ATOP value but lower TOP10 and RR. Figure 4 illustrates the impact of the dimension D = {50, 75, 100, 125, 150} on WTMF and WTMF-G (k = 4) on the test set. The trends hold across different D values with a consistent improvement, and generally a larger D leads to better performance. In all conditions WTMF-G outperforms WTMF.

6.3 Contribution of Subgraphs

We are interested in the contribution of each feature subgraph, so we list the impact of the individual components in Table 2. The impact of each subgraph is evaluated in two conditions: (a) the subgraph only; (b) the full model minus the subgraph. The full model is the combination of the 4 subgraphs (which is also the best model, k = 4, in Table 1). In the last two rows of Table 2 we also present the results of using authorship only and of the full model plus authorship. The 2nd column lists the number of links in each subgraph. To highlight the differences, we report the gain of each model over the baseline model WTMF. We have several interesting observations from Table 2.
It is clear that the hashtag subgraph on tweets is the most useful subgraph: with hashtag tweet it has the best ATOP and TOP10 values among subgraph-only condition (ATOP: +0.379% vs. 2nd best +0.136%, TOP10: +1.021% vs. 2nd best +0.503%), while in the full-model-minus condition, minus hashtag has the lowest ATOP and TOP10. Observing that it also contains the most links, we believe the coverage is another important reason for the great performance. It seems the named entity subgraph helps the least. Looking into the extracted named entities and hashtags, we find many popular named enti245 ties are covered by hashtags. That said, adding named entity subgraph into final model has a positive contribution. It is worth noting that the time news subgraph has the most positive influence on RR. This is because temporal information is very salient in news domain: usually there are several reports to describe an event within a short period, therefore the news latent vector is strengthened by receiving semantics from its neighbors. At last, we analyze the influence of authorship of tweets. Adding authorship into the full model greatly hurts the scores of TOP10 and RR, whereas it is helpful to ATOP. This is understandable since by introducing author links between tweets, to some degree we are averaging the latent vectors of tweets written by the same person. Therefore, for a tweet whose topic is vague and hard to detect, it will get some prior knowledge of topics through the author links (hence increase ATOP), whereas this prior knowledge becomes noise for the tweets that are already handled very well by the model (hence decrease TOP10 and RR). 6.4 Error Analysis We look closely into ATOP results to obtain an intuitive feel for what is captured and what is not. For example, the ATOP score of WTMF for the tweet-news pair below is 89.9%: Tweet: ...stoked growing speculation that Pakistan’s powerful military was quietly supporting moves... @declanwalsh News: Pakistan Supreme Court Orders Arrest of Prime Minister By identifying “Pakistan” and “Supreme Court” as hashtags/named entity, WTMF-G is able to propagate the semantics from the following two informative tweets to the original tweet, hence achieving a higher ATOP score of 91.9%. #Pakistan Supreme Court orders the arrest of the PM on corruption charges. A discouraging sign from a tumultuous political system: Pakistan’s Supreme Court ordered the arrest of PM Ashraf today. Below is an example that shows the deficiency of both WTMF and WTMF-G: Tweet: Another reason to contemplate moving: an early death News: America flunks its health exam In this case WTMF and WTMF-G achieve a low ATOP of 69.8% and 75.1%, respectively. The only evidence the latent variable models rely on is lexical items (WTMF-G extract additional textto-text correlation by word matching). To pinpoint the url referred news articles, other advanced NLP features should be exploited. In this case, we believe sentiment information could be helpful – both tweet and the news article contain a negative polarity. 7 Related Work Short Text Semantics: The field of short text semantics has progressed immensely in recent years. Early work focus on word pair similarity in the high dimensional space. The word pair similarity is either knowledge based (Mihalcea et al., 2006; Tsatsaronis et al., 2010) or corpus based (Li et al., 2006; Islam and Inkpen, 2008), where cooccurrence information cannot be efficiently exploited. 
Guo and Diab (2012b; 2012a; 2013) show the superiority of the latent space approach with the WTMF model, achieving state-of-the-art performance on two datasets. However, all of these models rely only on text-to-word information. In this paper, we focus on modeling the inter-text relations induced by Twitter/news features. We extend the WTMF model and adapt it to tweet modeling, achieving significantly better results. Modeling Tweets in a Latent Space: Ramage et al. (2010) also use hashtags to improve the latent representation of tweets in an LDA framework, Labeled-LDA (Ramage et al., 2009), treating each hashtag as a label. Similar to the experiments presented in this paper, the result of using Labeled-LDA alone is worse than the IR model, due to the sparseness of the induced LDA latent vector. Jin et al. (2011) apply an LDA-based model to clustering by incorporating url-referred documents. The semantics of long documents are transferred to the topic distribution of tweets. News recommendation: A news recommendation system aims to recommend news articles to a user based on the features (e.g., keywords, tags, category) in the documents that the user likes (hence these documents form a training set) (Claypool et al., 1999; Corso et al., 2005; Lee and Park, 2007). Our paper resembles this line of work in searching for a related news article. However, we aim to recommend a news article based only on a tweet, which is a much smaller context than the set of favorite documents chosen by a user. Research on Tweets: In Duan et al. (2010), url availability is an important feature for tweet ranking. However, the number of tweets with an explicit url is very limited. Huang et al. (2012) propose a graph-based framework to propagate tweet ranking scores, where relevant web documents are found to be helpful for discovering informative tweets. Both lines of work could take advantage of ours, either to extract potential url features or to retrieve topically similar web documents. Sankaranarayanan et al. (2009) aim at capturing tweets that correspond to late-breaking news. However, they cluster tweets and simply choose a url-referred news article in those tweets as the related news for the whole cluster (the urls are visible to the systems). Abel et al. (2011) is the work most closely related to ours; however, their focus is the user profiling task, so they do not provide a paired tweet/news data set and have to conduct a manual evaluation. 8 Conclusion We propose a Linking-Tweets-to-News task, which potentially benefits many NLP applications in which off-the-shelf NLP tools can be applied to the most relevant news. We also collect a gold standard dataset by crawling tweets, each with a url referring to a news article. We formalize the linking task as a short text modeling problem, and extract Twitter/news-specific features that capture text-to-text relations, which are incorporated into a latent variable model. We achieve significant improvement over the baselines. Acknowledgements This work was supported by the U.S. Army Research Laboratory under Cooperative Agreement No. W911NF-09-2-0053 (NS-CTA), the U.S. NSF CAREER Award under Grant IIS-0953149, the U.S. NSF EAGER Award under Grant No. IIS-1144111, the U.S. DARPA FA8750-13-2-0041 Deep Exploration and Filtering of Text (DEFT) Program and a CUNY Junior Faculty Award. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S.
Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. References Fabian Abel, Qi Gao, Geert-Jan Houben, and Ke Tao. 2011. Semantic enrichment of twitter posts for user profile construction on the social web. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics. Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In First Joint Conference on Lexical and Computational Semantics (*SEM). Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3. Samuel Brody and Nicholas Diakopoulos. 2011. Cooooooooooooooollllllllllllll!!!!!!!!!!!!!! using word lengthening to detect sentiment in microblogs. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Mark Claypool, Anuja Gokhale, Tim Miranda, Pavel Murnikov, Dmitry Netes, and Matthew Sartin. 1999. Combining content-based and collaborative filters in an online newspaper. In In Proceedings of the ACM SIGIR Workshop on Recommender Systems. Michael Conover, Jacob Ratkiewicz, Matthew Francisco, Bruno Gonc¸alves, Filippo Menczer, and Alessandro Flammini. 2011. Political polarization on twitter. In ICWSM. Gianna M. Del Corso, Antonio Gulli, and Francesco Romani. 2005. Ranking a stream of news. In WWW, pages 97–106. Yajuan Duan, Long Jiang, Tao Qin, Ming Zhou, and Heung-Yeung Shum. 2010. An empirical study on learning to rank of tweets. In COLING. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. Weiwei Guo and Mona Diab. 2012a. Learning the latent semantics of a concept by its definition. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. Weiwei Guo and Mona Diab. 2012b. Modeling sentences in the latent space. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. Weiwei Guo and Mona Diab. 2012c. Weiwei: A simple unsupervised latent semantics based approach for sentence similarity. In First Joint Conference on Lexical and Computational Semantics (*SEM). 247 Weiwei Guo and Mona Diab. 2013. Improving lexical semantics for sentential semantics: Modeling selectional preference and similar words in a latent variable model. In The 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Hongzhao Huang, Arkaitz Zubiaga, Heng Ji, Hongbo Deng, Dong Wang, Hieu Le, Tarek Abdelzather, Jiawei Han, Alice Leung, John Hancock, and Clare Voss. 2012. Tweet ranking based on heterogeneous networks. In Proceedings of the 24th International Conference on Computational Linguistics. Aminul Islam and Diana Inkpen. 2008. Semantic text similarity using corpus-based word similarity and string similarity. ACM Transactions on Knowledge Discovery from Data, 2. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of ACL-08: HLT. Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In Proceedings of the 49th Annual Meeting of Association for Computational Linguistics. Ou Jin, Nathan N. 
Liu, Kai Zhao, Yong Yu, and Qiang Yang. 2011. Transferring topical knowledge from auxiliary long texts for short text clustering. In Proceedings of the 20th ACM international conference on Information and knowledge management. Thomas K Landauer, Peter W. Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis. Discourse Processes, 25. H. J. Lee and Sung Joo Park. 2007. Moners: A news recommender for the mobile web. Expert Syst. Appl., 32(1):143–150. Yuhua Li, David McLean, Zuhair A. Bandar, James D. O’Shea, and Keeley Crockett. 2006. Sentence similarity based on semantic nets and corpus statistics. IEEE Transaction on Knowledge and Data Engineering, 18. Hao Li, Yu Chen, Heng Ji, Smaranda Muresan, and Dequan Zheng. 2012. Combining social cognitive theories with linguistic features for multi-genre sentiment analysis. In In Proceedings of the 26th Pacific Asia Conference on Language, Information and Computation. Xiaohua Liu, Shaodian Zhang, Furu Wei, and Ming Zhou. 2011. Recognizing named entities in tweets. In The Semanic Web: Research and Applications. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of the 21st National Conference on Articial Intelligence. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics. Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D. Manning. 2009. Labeled lda: A supervised topic model for credit attribution in multilabeled corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Daniel Ramage, Susan Dumais, and Dan Liebling. 2010. Characterizing microblogs with topic models. In Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media. Jagan Sankaranarayanan, Hanan Samet, Benjamin E. Teitler, Michael D. Lieberman, and Jon Sperling. 2009. Twitterstand: news in tweets. In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proceedings of Advances in Neural Information Processing Systems. Michael Speriosui, Nikita Sudan, Sid Upadhyay, and Jason Baldridge. 2011. Twitter polarity classification with label propagation over lexical links and the follower graph. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Nathan Srebro and Tommi Jaakkola. 2003. Weighted low-rank approximations. In Proceedings of the Twentieth International Conference on Machine Learning. Harald Steck. 2010. Training and testing of recommender systems on data missing not at random. 
In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In HLT-NAACL. 248 George Tsatsaronis, Iraklis Varlamis, and Michalis Vazirgiannis. 2010. Text relatedness based on a word thesaurus. Journal of Articial Intelligence Research, 37. Andranik Tumasjan, Timm Oliver Sprenger, Philipp G. Sandner, and Isabell M. Welpe. 2010. Predicting elections with twitter: What 140 characters reveal about political sentiment. In ICWSM. Xuerui Wang and Andrew McCallum. 2006. Topics over time: a non-markov continuous-time model of topical trends. In In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. Rui Yan, Mirella Lapata, and Xiaoming Li. 2012. Tweet recommendation with graph co-ranking. In Proceedings of the 24th International Conference on Computational Linguistics. Lei Yang, Tao Sun, Ming Zhang, and Qiaozhu Mei. 2012. We know what @you #tag: does the dual role affect hashtag adoption? In Proceedings of the 21st international conference on World Wide Web. 249
2013
24
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 250–259, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A computational approach to politeness with application to social factors Cristian Danescu-Niculescu-Mizil∗‡, Moritz Sudhof†, Dan Jurafsky†, Jure Leskovec∗, and Christopher Potts† ∗Computer Science Department, †Linguistics Department ∗†Stanford University, ‡Max Planck Institute SWS cristiand|[email protected], sudhof|jurafsky|[email protected] Abstract We propose a computational framework for identifying linguistic aspects of politeness. Our starting point is a new corpus of requests annotated for politeness, which we use to evaluate aspects of politeness theory and to uncover new interactions between politeness markers and context. These findings guide our construction of a classifier with domain-independent lexical and syntactic features operationalizing key components of politeness theory, such as indirection, deference, impersonalization and modality. Our classifier achieves close to human performance and is effective across domains. We use our framework to study the relationship between politeness and social power, showing that polite Wikipedia editors are more likely to achieve high status through elections, but, once elevated, they become less polite. We see a similar negative correlation between politeness and power on Stack Exchange, where users at the top of the reputation scale are less polite than those at the bottom. Finally, we apply our classifier to a preliminary analysis of politeness variation by gender and community. 1 Introduction Politeness is a central force in communication, arguably as basic as the pressure to be truthful, informative, relevant, and clear (Grice, 1975; Leech, 1983; Brown and Levinson, 1978). Natural languages provide numerous and diverse means for encoding politeness and, in conversation, we constantly make choices about where and how to use these devices. Kaplan (1999) observes that “people desire to be paid respect” and identifies honorifics and other politeness markers, like please, as “the coin of that payment”. In turn, politeness markers are intimately related to the power dynamics of social interactions and are often a decisive factor in whether those interactions go well or poorly (Gyasi Obeng, 1997; Chilton, 1990; Andersson and Pearson, 1999; Rogers and LeeWong, 2003; Holmes and Stubbe, 2005). The present paper develops a computational framework for identifying and characterizing politeness marking in requests. We focus on requests because they involve the speaker imposing on the addressee, making them ideal for exploring the social value of politeness strategies (Clark and Schunk, 1980; Francik and Clark, 1985). Requests also stimulate extensive use of what Brown and Levinson (1987) call negative politeness: speaker strategies for minimizing (or appearing to minimize) the imposition on the addressee, for example, by being indirect (Would you mind) or apologizing for the imposition (I’m terribly sorry, but) (Lakoff, 1973; Lakoff, 1977; Brown and Levinson, 1978). Our investigation is guided by a new corpus of requests annotated for politeness. The data come from two large online communities in which members frequently make requests of other members: Wikipedia, where the requests involve editing and other administrative functions, and Stack Exchange, where the requests center around a diverse range of topics (e.g., programming, gardening, cycling). 
The corpus confirms the broad outlines of linguistic theories of politeness pioneered by Brown and Levinson (1987), but it also reveals new interactions between politeness markings and the morphosyntactic context. For example, the politeness of please depends on its syntactic position and the politeness markers it co-occurs with. Using this corpus, we construct a politeness classifier with a wide range of domainindependent lexical, sentiment, and dependency features operationalizing key components of po250 liteness theory, including not only the negative politeness markers mentioned above but also elements of positive politeness (gratitude, positive and optimistic sentiment, solidarity, and inclusiveness). The classifier achieves near human-level accuracy across domains, which highlights the consistent nature of politeness strategies and paves the way to using the classifier to study new data. Politeness theory predicts a negative correlation between politeness and the power of the requester, where power is broadly construed to include social status, authority, and autonomy (Brown and Levinson, 1987). The greater the speaker’s power relative to her addressee, the less polite her requests are expected to be: there is no need for her to incur the expense of paying respect, and failing to make such payments can invoke, and hence reinforce, her power. We support this prediction by applying our politeness framework to Wikipedia and Stack Exchange, both of which provide independent measures of social status. We show that polite Wikipedia editors are more likely to achieve high status through elections; however, once elected, they become less polite. Similarly, on Stack Exchange, we find that users at the top of the reputation scale are less polite than those at the bottom. Finally, we briefly address the question of how politeness norms vary across communities and social groups. Our findings confirm established results about the relationship between politeness and gender, and they identify substantial variation in politeness across different programming language subcommunities on Stack Exchange. 2 Politeness data Requests involve an imposition on the addressee, making them a natural domain for studying the inter-connections between linguistic aspects of politeness and social variables. Requests in online communities We base our analysis on two online communities where requests have an important role: the Wikipedia community of editors and the Stack Exchange question-answer community.1 On Wikipedia, to coordinate on the creation and maintenance of the collaborative encyclopedia, editors can interact with each other on user talk-pages;2 re1http://stackexchange.com/about 2http://en.wikipedia.org/wiki/ Wikipedia:User_pages quests posted on a user talk-page, although public, are generally directed to the owner of the talkpage. On Stack Exchange, users often comment on existing posts requesting further information or proposing edits; these requests are generally directed to the authors of the original posts. Both communities are not only rich in userto-user requests, but these requests are also part of consequential conversations, not empty social banter; they solicit specific information or concrete actions, and they expect a response. Politeness annotation Computational studies of politeness, or indeed any aspect of linguistic pragmatics, demand richly labeled data. 
We therefore label a large portion of our request data (over 10,000 utterances) using Amazon Mechanical Turk (AMT), creating the largest corpus with politeness annotations (see Table 1 for details).3 We choose to annotate requests containing exactly two sentences, where the second sentence is the actual request (and ends with a question mark). This provides enough context to the annotators while also controlling for length effects. Each annotator was instructed to read a batch of 13 requests and consider them as originating from a co-worker by email. For each request, the annotator had to indicate how polite she perceived the request to be by using a slider with values ranging from “very impolite” to “very polite”.4 Each request was labeled by five different annotators. We vetted annotators by restricting their residence to be in the U.S. and by conducting a linguistic background questionnaire. We also gave them a paraphrasing task shown to be effective for verifying and eliciting linguistic attentiveness (Munro et al., 2010), and we monitored the annotation job and manually filtered out annotators who submitted uniform or seemingly random annotations. Because politeness is highly subjective and annotators may have inconsistent scales, we applied the standard z-score normalization to each worker’s scores. Finally, we define the politeness score (henceforth politeness) of a request as the average of the five scores assigned by the annotators. The distribution of resulting request scores (shown in Figure 1) has an average of 0 and stan3Publicly available at http://www.mpi-sws.org/ ˜cristian/Politeness.html 4We used non-categorical ratings for finer granularity and to help account for annotators’ different perception scales. 251 domain #requests #annotated #annotators Wiki 35,661 4,353 219 SE 373,519 6,604 212 Table 1: Summary of the request data and its politeness annotations. Figure 1: Distribution of politeness scores. Positive scores indicate requests perceived as polite. dard deviation of 0.7 for both domains; positive values correspond to polite requests (i.e., requests with normalized annotations towards the “very polite” extreme) and negative values to impolite requests. A summary of all our request data is shown in Table 1. Inter-annotator agreement To evaluate the reliability of the annotations we measure the interannotator agreement by computing, for each batch of 13 documents that were annotated by the same set of 5 users, the mean pairwise correlation of the respective scores. For reference, we compute the same quantities after randomizing the scores by sampling from the observed distribution of politeness scores. As shown in Figure 2, the labels are coherent and significantly different from the randomized procedure (p < 0.0001 according to a Wilcoxon signed rank test).5 Binary perception Although we did not impose a discrete categorization of politeness, we acknowledge an implicit binary perception of the phenomenon: whenever an annotator moved a slider in one direction or the other, she made a binary politeness judgment. However, the bound5The commonly used Cohen/Fleiss Kappa agreement measures are not suitable for this type of annotation, in which labels are continuous rather than categorical. Figure 2: Inter-annotator pairwise correlation, compared to the same measure after randomizing the scores. Quartile: 1st 2nd 3rd 4th Wiki 62% 8% 3% 51% SE 37% 4% 6% 46% Table 2: The percentage of requests for which all five annotators agree on binary politeness. 
The 4th quartile contains the requests with the top 25% politeness scores in the data. (For reference, randomized scoring yields agreement percentages of <20% for all quartiles.) ary between somewhat polite and somewhat impolite requests can be blurry. To test this intuition, we break the set of annotated requests into four groups, each corresponding to a politeness score quartile. For each quartile, we compute the percentage of requests for which all five annotators made the same binary politeness judgment. As shown in Table 2, full agreement is much more common in the 1st (bottom) and 4th (top) quartiles than in the middle quartiles. This suggests that the politeness scores assigned to requests that are only somewhat polite or somewhat impolite are less reliable and less tied to an intuitive notion of binary politeness. This discrepancy motivates our choice of classes in the prediction experiments (Section 4) and our use of the top politeness quartile (the 25% most polite requests) as a reference in our subsequent discussion. 3 Politeness strategies As we mentioned earlier, requests impose on the addressee, potentially placing her in social peril if she is unwilling or unable to comply. Requests therefore naturally give rise to the negative po252 liteness strategies of Brown and Levinson (1987), which are attempts to mitigate these social threats. These strategies are prominent in Table 3, which describes the core politeness markers we analyzed in our corpus of Wikipedia requests. We do not include the Stack Exchange data in this analysis, reserving it as a “test community” for our prediction task (Section 4). Requests exhibiting politeness markers are automatically extracted using regular expression matching on the dependency parse obtained by the Stanford Dependency Parser (de Marneffe et al., 2006), together with specialized lexicons. For example, for the hedges marker (Table 3, line 19), we match all requests containing a nominal subject dependency edge pointing out from a hedge verb from the hedge list created by Hyland (2005). For each politeness strategy, Table 3 shows the average politeness score of the respective requests (as described in Section 2; positive numbers indicate polite requests), and their top politeness quartile membership (i.e., what percentage fall within the top quartile of politeness scores). As discussed at the end of Section 2, the top politeness quartile gives a more robust and more intuitive measure of politeness. For reference, a random sample of requests will have a 0 politeness score and a 25% top quartile membership; in both cases, larger numbers indicate higher politeness. Gratitude and deference (lines 1–2) are ways for the speaker to incur a social cost, helping to balance out the burden the request places on the addressee. Adopting Kaplan (1999)’s metaphor, these are the coin of the realm when it comes to paying the addressee respect. Thus, they are indicators of positive politeness. Terms from the sentiment lexicon (Liu et al., 2005) are also tools for positive politeness, either by emphasizing a positive relationship with the addressee (line 4), or being impolite by using negative sentiment that damages this positive relationship (line 5). Greetings (line 3) are another way to build a positive relationship with the addressee. The remainder of the cues in Table 3 are negative politeness strategies, serving the purpose of minimizing, at least in appearance, the imposition on the addressee. 
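Before going through these cues individually, a small sketch may help make the extraction procedure above concrete. It checks a dependency-parsed request for the hedges marker; the edge representation and the tiny hedge list are assumptions made for this example, not the authors' actual patterns or Hyland's (2005) full lexicon.

```python
# Hedged sketch: detecting the "hedges" strategy from dependency edges.
# Each edge is (relation, governor_lemma, dependent_lemma); the hedge list
# below is a tiny illustrative subset, not the full lexicon used in the paper.
HEDGE_VERBS = {"suggest", "suppose", "seem", "appear", "guess", "indicate"}

def has_hedge_marker(dependency_edges):
    """True if some nominal subject attaches to a hedge verb,
    e.g. nsubj(suggest, I) for 'I suggest we start with ...'."""
    return any(rel == "nsubj" and governor in HEDGE_VERBS
               for rel, governor, dependent in dependency_edges)

# Example: edges for "I suggest we start with the introduction."
edges = [("nsubj", "suggest", "i"), ("ccomp", "suggest", "start"),
         ("nsubj", "start", "we")]
print(has_hedge_marker(edges))  # True
```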
Apologizing (line 6) deflects the social threat of the request by attuning to the imposition itself. Being indirect (line 9) is another way to minimize social threat. This strategy allows the speaker to avoid words and phrases conventionally associated with requests. First-person plural forms like we and our (line 15) are also ways of being indirect, as they create the sense that the burden of the request is shared between speaker and addressee (We really should ...). Though indirectness is not invariably interpreted as politeness marking (Blum-Kulka, 2003), it is nonetheless a reliable marker of it, as our scores indicate. What’s more, direct variants (imperatives, statements about the addressee’s obligations) are less polite (lines 10–11). Indirect strategies also combine with hedges (line 19) conveying that the addressee is unlikely to accept the burden (Would you by any chance ...?, Would it be at all possible ...?). These too serve to provide the addressee with a face-saving way to deny the request. We even see subtle effects of modality at work here: the irrealis, counterfactual forms would and could are more polite than their ability (dispositional) or future-oriented variants can and will; compare lines 12 and 13. This parallels the contrast between factuality markers (impolite; line 20) and hedging (polite; line 19). Many of these features are correlated with each other, in keeping with the insight of Brown and Levinson (1987) that politeness markers are often combined to create a cumulative effect of increased politeness. Our corpora also highlight interactions that are unexpected (or at least unaccounted for) on existing theories of politeness. For example, sentence-medial please is polite (line 7), presumably because of its freedom to combine with other negative politeness strategies (Could you please ... ). In contrast, sentence-initial please is impolite (line 8), because it typically signals a more direct strategy (Please do this), which can make the politeness marker itself seem insincere. We see similar interactions between pronominal forms and syntactic structure: sentence-initial you is impolite (You need to ...), whereas sentencemedial you is often part of the indirect strategies we discussed above (Would/Could you ...). 4 Predicting politeness We now show how our linguistic analysis can be used in a machine learning model for automatically classifying requests according to politeness. A classifier can help verify the predictive power, robustness, and domain-independent generality of the linguistic strategies of Section 3. Also, by providing automatic politeness judgments for large 253 Strategy Politeness In top quartile Example 1. Gratitude 0.87*** 78%*** I really appreciate that you’ve done them. 2. Deference 0.78*** 70%*** Nice work so far on your rewrite. 3. Greeting 0.43*** 45%*** Hey, I just tried to ... 4. Positive lexicon 0.12*** 32%*** Wow! / This is a great way to deal... 5. Negative lexicon -0.13*** 22%** If you’re going to accuse me ... 6. Apologizing 0.36*** 53%*** Sorry to bother you ... 7. Please 0.49*** 57%*** Could you please say more... 8. Please start −0.30* 22% Please do not remove warnings ... 9. Indirect (btw) 0.63*** 58%** By the way, where did you find ... 10. Direct question −0.27*** 15%*** What is your native language? 11. Direct start −0.43*** 9%*** So can you retrieve it or not? 12. Counterfactual modal 0.47*** 52%*** Could/Would you ... 13. Indicative modal 0.09 27% Can/Will you ... 14. 
1st person start 0.12*** 29%** I have just put the article ... 15. 1st person pl. 0.08* 27% Could we find a less complex name ... 16. 1st person 0.08*** 28%*** It is my view that ... 17. 2nd person 0.05*** 30%*** But what’s the good source you have in mind? 18. 2nd person start −0.30*** 17%** You’ve reverted yourself ... 19. Hedges 0.14*** 28% I suggest we start with ... 20. Factuality −0.38*** 13%*** In fact you did link, ... Table 3: Positive (1-5) and negative (6–20) politeness strategies and their relation to human perception of politeness. For each strategy we show the average (human annotated) politeness scores for the requests exhibiting that strategy (compare with 0 for a random sample of requests; a positive number indicates the strategy is perceived as being polite), as well as the percentage of requests exhibiting the respective strategy that fall in the top quartile of politeness scores (compare with 25% for a random sample of requests). Throughout the paper: for politeness scores, statistical significance is calculated by comparing the set of requests exhibiting the strategy with the rest using a Mann-Whitney-Wilcoxon U test; for top quartile membership a binomial test is used. amounts of new data on a scale unfeasible for human annotation, it can also enable a detailed analysis of the relation between politeness and social factors (Section 5). Task setup To evaluate the robustness and domain-independence of the analysis from Section 3, we run our prediction experiments on two very different domains. We treat Wikipedia as a “development domain” since we used it for developing and identifying features and for training our models. Stack Exchange is our “test domain” since it was not used for identifying features. We take the model (features and weights) trained on Wikipedia and use them to classify requests from Stack Exchange. We consider two classes of requests: polite and impolite, defined as the top and, respectively, bottom quartile of requests when sorted by their politeness score (based on the binary notion of politeness discussed in Section 2). The classes are therefore balanced, with each class consisting of 1,089 requests for the Wikipedia domain and 1,651 requests for the Stack Exchange domain. We compare two classifiers — a bag of words classifier (BOW) and a linguistically informed classifier (Ling.) — and use human labelers as a reference point. The BOW classifier is an SVM using a unigram feature representation.6 We consider this to be a strong baseline for this new 6Unigrams appearing less than 10 times are excluded. 254 classification task, especially considering the large amount of training data available. The linguistically informed classifier (Ling.) is an SVM using the linguistic features listed in Table 3 in addition to the unigram features. Finally, to obtain a reference point for the prediction task we also collect three new politeness annotations for each of the requests in our dataset using the same methodology described in Section 2. We then calculate human performance on the task (Human) as the percentage of requests for which the average score from the additional annotations matches the binary politeness class of the original annotations (e.g., a positive score corresponds to the polite class). Classification results We evaluate the classifiers both in an in-domain setting, with a standard leave-one-out cross validation procedure, and in a cross-domain setting, where we train on one domain and test on the other (Table 4). 
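As a rough illustration of this setup, the sketch below builds the two classifiers with scikit-learn in place of the SVMlight implementation the authors use. The StrategyFeatures transformer is a toy keyword-based stand-in for the Table 3 cues, and the min_df cutoff only approximates the unigram frequency threshold described above; all names are illustrative.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

class StrategyFeatures(BaseEstimator, TransformerMixin):
    """Toy stand-in for the politeness-strategy features: binary indicators
    from simple keyword cues (the real system uses dependency patterns
    and lexicons rather than substring matching)."""
    CUES = {"gratitude": ("thank", "appreciate"),
            "apology": ("sorry", "apolog"),
            "please": ("please",),
            "hedge": ("suggest", "perhaps", "maybe")}

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.array([[any(c in text.lower() for c in cues)
                          for cues in self.CUES.values()] for text in X],
                        dtype=float)

# BOW baseline: unigram counts, dropping unigrams seen in fewer than 10 requests.
bow_clf = Pipeline([("unigrams", CountVectorizer(min_df=10)),
                    ("svm", LinearSVC())])

# Linguistically informed classifier: unigrams plus strategy indicators.
ling_clf = Pipeline([
    ("features", FeatureUnion([("unigrams", CountVectorizer(min_df=10)),
                               ("strategies", StrategyFeatures())])),
    ("svm", LinearSVC()),
])

# Cross-domain setting: train on Wikipedia requests, evaluate on Stack Exchange.
# ling_clf.fit(wiki_texts, wiki_labels); ling_clf.score(se_texts, se_labels)
```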
For both our development and our test domains, and in both the in-domain and cross-domain settings, the linguistically informed features give 3-4% absolute improvement over the bag of words model. While the in-domain results are within 3% of human performance, the greater room for improvement in the cross-domain setting motivates further research on linguistic cues of politeness. The experiments in this section confirm that our theory-inspired features are indeed effective in practice, and generalize well to new domains. In the next section we exploit this insight to automatically annotate a much larger set of requests (about 400,000) with politeness labels, enabling us to relate politeness to several social variables and outcomes. For new requests, we use class probability estimates obtained by fitting a logistic regression model to the output of the SVM (Witten and Frank, 2005) as predicted politeness scores (with values between 0 and 1; henceforth politeness, by abuse of language). 5 Relation to social factors We now apply our framework to studying the relationship between politeness and social variables, focussing on social power dynamics. Encouraged by the close-to-human performance of our in-domain classifiers, we use them to assign politeness labels to our full dataset and then compare these labels to independent measures of power and status in our data. The results closely match those obtained with human-labeled data alone, thereby In-domain Cross-domain Train Wiki SE Wiki SE Test Wiki SE SE Wiki BOW 79.84% 74.47% 64.23% 72.17% Ling. 83.79% 78.19% 67.53% 75.43% Human 86.72% 80.89% 80.89% 86.72% Table 4: Accuracies of our two classifiers for Wikipedia (Wiki) and Stack Exchange (SE), for in-domain and cross-domain settings. Human performance is included as a reference point. The random baseline performance is 50%. supporting the use of computational methods to pursue questions about social variables. 5.1 Relation to social outcome Earlier, we characterized politeness markings as currency used to pay respect. Such language is therefore costly in a social sense, and, relatedly, tends to incur costs in terms of communicative efficiency (Van Rooy, 2003). Are these costs worth paying? We now address this question by studying politeness in the context of the electoral system of the Wikipedia community of editors. Among Wikipedia editors, status is a salient social variable (Anderson et al., 2012). Administrators (admins) are editors who have been granted certain rights, including the ability to block other editors and to protect or delete articles.7 Admins have a higher status than common editors (non-admins), and this distinction seems to be widely acknowledged by the community (Burke and Kraut, 2008b; Leskovec et al., 2010; DanescuNiculescu-Mizil et al., 2012). Aspiring editors become admins through public elections,8 so we know when the status change from non-admin to admins occurred and can study users’ language use in relation to that time. 
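The significance statements in the analyses that follow (and in the table captions) rest on a Mann-Whitney-Wilcoxon U test over politeness scores and a binomial test over top-quartile membership. A minimal sketch of that kind of comparison is shown below, using synthetic score arrays sized like the groups in Table 5 rather than the actual request data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic predicted-politeness scores for two groups of requests
# (group sizes and means mirror Table 5; the distributions are made up).
admins_to_be = rng.normal(0.46, 0.2, size=1400)
non_admins = rng.normal(0.39, 0.2, size=28900)

# Mann-Whitney-Wilcoxon U test: do the two score distributions differ?
u_stat, p_value = stats.mannwhitneyu(admins_to_be, non_admins,
                                     alternative="two-sided")

# Binomial test on top-quartile membership: is 30% of 1,400 requests in the
# top quartile significantly different from the 25% expected by chance?
binom_p = stats.binomtest(k=int(0.30 * 1400), n=1400, p=0.25).pvalue
print(p_value, binom_p)
```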
To see whether politeness correlates with eventual high status, we compare, in Table 5, the politeness levels of requests made by users who will eventually succeed in becoming administrators (Eventual status: Admins) with requests made by users who are not admins (Non-admins).9 We observe that admins-to-be are significantly more po7http://en.wikipedia.org/wiki/ Wikipedia:Administrators 8http://en.wikipedia.org/wiki/ Wikipedia:Requests_for_adminship 9We consider only requests made up to one month before the election, to avoid confusion with pre-election behavior. 255 Eventual status Politeness Top quart. Admins 0.46** 30%*** Non-admins 0.39*** 25% Failed 0.37** 22% Table 5: Politeness and status. Editors who will eventually become admins are more polite than non-admins (p<0.001 according to a MannWhitney-Wilcoxon U test) and than editors who will eventually fail to become admins (p<0.001). Out of their requests, 30% are rated in the top politeness quartile (significantly more than the 25% of a random sample; p<0.001 according to a binomial test). This analysis was conducted on 31k requests (1.4k for Admins, 28.9k for Non-admins, 652 for Failed). lite than non-admins. One might wonder whether this merely reflects the fact that not all users aspire to become admins, and those that do are more polite. To address this, we also consider users who ran for adminship but did not earn community approval (Eventual status: Failed). These users are also significantly less polite than their successful counterparts, indicating that politeness indeed correlates with a positive social outcome here. 5.2 Politeness and power We expect a rise in status to correlate with a decline in politeness (as predicted by politeness theory, and discussed in Section 1). The previous section does not test this hypothesis, since all editors compared in Table 5 had the same (non-admin) status when writing the requests. However, our data does provide three ways of testing this hypothesis. First, after the adminship elections, successful editors get a boost in power by receiving admin privileges. Figure 3 shows that this boost is mirrored by a significant decrease in politeness (blue, diamond markers). Losing an election has the opposite effect on politeness (red, circle markers), perhaps as a consequence of reinforced low status. Second, Stack Exchange allows us to test more situational power effects.10 On the site, users request, from the community, information they are lacking. This informational asymmetry between the question-asker and his audience puts him at 10We restrict all experiments in this section to the largest subcommunity of Stack Exchange, namely Stack Overflow. Before election Election After election 0.41 0.37 0.39 0.46 Predicted politeness scores Successful candidates Failed candidates Figure 3: Successful and failed candidates before and after elections. Editors that will eventually succeed (diamond marker) are significantly more polite than those that will fail (circle markers). Following the elections, successful editors become less polite while unsuccessful editors become more polite. a social disadvantage. We therefore expect the question-asker to be more polite than the people who respond. Table 6 shows that this expectation is born out: comments posted to a thread by the original question-asker are more polite than those posted by other users. Role Politeness Top quart. Question-asker 0.65*** 32%*** Answer-givers 0.52*** 20%*** Table 6: Politeness and dependence. 
Requests made in comments posted by the question-asker are significantly more polite than the other requests. Analysis conducted on 181k requests (106k for question-askers, 75k for answer-givers). Third, Stack Exchange allows us to examine power in the form of authority, through the community’s reputation system. Again, we see a negative correlation between politeness and power, even after controlling for the role of the user making the requests (i.e., Question-asker or Answergiver). Table 7 summarizes the results.11 Human validation The above analyses are based on predicted politeness from our classifier. This allows us to use the entire request data cor11Since our data does not contain time stamps for reputation scores, we only consider requests that were issued in the six months prior to the available snapshot. 256 Reputation level Politeness Top quart. Low reputation 0.68*** 27%*** Middle reputation 0.66*** 25% High reputation 0.64*** 23%*** Table 7: Politeness and Stack Exchange reputation (texts by question-askers only). High-reputation users are less polite. Analysis conducted on 25k requests (4.5k low, 12.5k middle, 8.4k high). pus to test our hypotheses and to apply precise controls to our experiments (such as restricting our analysis to question-askers in the reputation experiment). In order to validate this methodology, we turned again to human annotation: we collected additional politeness annotation for the types of requests involved in the newly designed experiments. When we re-ran our experiments on human-labeled data alone we obtained the same qualitative results, with statistical significance always lower than 0.01.12 Prediction-based interactions The human validation of classifier-based results suggests that our prediction framework can be used to explore differences in politeness levels across factors of interest, such as communities, geographical regions and gender, even where gathering sufficient human-annotated data is infeasible. We mention just a few such preliminary results here: (i) Wikipedians from the U.S. Midwest are most polite (when compared to other census-defined regions), (ii) female Wikipedians are generally more polite (consistent with prior studies in which women are more polite in a variety of domains; (Herring, 1994)), and (iii) programming language communities on Stack Exchange vary significantly by politeness (Table 8; full disclosure: our analyses were conducted in Python). 6 Related work Politeness has been a central concern of modern pragmatic theory since its inception (Grice, 1975; Lakoff, 1973; Lakoff, 1977; Leech, 1983; Brown and Levinson, 1978), because it is a source of pragmatic enrichment, social meaning, and cultural variation (Harada, 1976; Matsumoto, 1988; 12However, due to the limited size of the human-labeled data, we could not control for the role of the user in the Stack Exchange reputation experiment. PL name Politeness Top quartile Python 0.47*** 23% Perl 0.49 24% PHP 0.51 24% Javascript 0.53** 26%** Ruby 0.59*** 28%* Table 8: Politeness of requests from different language communities on Stack Exchange. Ide, 1989; Blum-Kulka and Kasper, 1990; BlumKulka, 2003; Watts, 2003; Byon, 2006). The starting point for most research is the theory of Brown and Levinson (1987). 
Aspects of this theory have been explored from game-theoretic perspectives (Van Rooy, 2003) and implemented in language generation systems for interactive narratives (Walker et al., 1997), cooking instructions, (Gupta et al., 2007), translation (Faruqui and Pado, 2012), spoken dialog (Wang et al., 2012), and subjectivity analysis (Abdul-Mageed and Diab, 2012), among others. In recent years, politeness has been studied in online settings. Researchers have identified variation in politeness marking across different contexts and media types (Herring, 1994; Brennan and Ohaeri, 1999; Duthler, 2006) and between different social groups (Burke and Kraut, 2008a). The present paper pursues similar goals using orders of magnitude more data, which facilitates a fuller survey of different politeness strategies. Politeness marking is one aspect of the broader issue of how language relates to power and status, which has been studied in the context of workplace discourse (Bramsen et al., ; Diehl et al., 2007; Peterson et al., 2011; Prabhakaran et al., 2012; Gilbert, 2012; McCallum et al., 2007) and social networking (Scholand et al., 2010). However, this research focusses on domain-specific textual cues, whereas the present work seeks to leverage domain-independent politeness cues, building on the literature on how politeness affects worksplace social dynamics and power structures (Gyasi Obeng, 1997; Chilton, 1990; Andersson and Pearson, 1999; Rogers and Lee-Wong, 2003; Holmes and Stubbe, 2005). Burke and Kraut (2008b) study the question of how and why specific individuals rise to administrative positions on Wikipedia, and Danescu-Niculescu-Mizil et al. (2012) show that power differences on Wikipedia 257 are revealed through aspects of linguistic accommodation. The present paper complements this work by revealing the role of politeness in social outcomes and power relations. 7 Conclusion We construct and release a large collection of politeness-annotated requests and use it to evaluate key aspects of politeness theory. We build a politeness classifier that achieves near-human performance and use it to explore the relation between politeness and social factors such as power, status, gender, and community membership. We hope the publicly available collection of annotated requests enables further study of politeness and its relation to social factors, as this paper has only begun to explore this area. Acknowledgments We thank Jean Wu for running the AMT annotation task, and all the participating turkers. We thank Diana Minculescu and the anonymous reviewers for their helpful comments. This work was supported in part by NSF IIS-1016909, CNS-1010921, IIS-1149837, IIS-1159679, ARO MURI, DARPA SMISC, Okawa Foundation, Docomo, Boeing, Allyes, Volkswagen, Intel, Alfred P. Sloan Fellowship, the Microsoft Faculty Fellowship, the Gordon and Dailey Pattee Faculty Fellowship, and the Center for Advanced Study in the Behavioral Sciences at Stanford. References Muhammad Abdul-Mageed and Mona Diab. 2012. AWATIF: A multi-genre corpus for Modern Standard Arabic subjectivity and sentiment analysis. In Proceedings of LREC, pages 3907–3914. Ashton Anderson, Daniel Huttenlocher, Jon Kleinberg, and Jure Leskovec. 2012. Effects of user similarity in social media. In Proceedings of WSDM, pages 703–712. Lynne M. Andersson and Christine M. Pearson. 1999. Tit for tat? the spiraling effect of incivility in the workplace. The Academy of Management Review, 24(3):452–471. Shoshana Blum-Kulka and Gabriele Kasper. 1990. 
Special issue on politeness. Journal of Pragmatics, 144(2). Shoshana Blum-Kulka. 2003. Indirectness and politeness in requests: Same or different? Journal of Pragmatics, 11(2):131–146. Philip Bramsen, Martha Escobar-Molana, Ami Patel, and Rafael Alonso. Extracting social power relationships from natural language. In Proceedings of ACL, pages 773–782. Susan E Brennan and Justina O Ohaeri. 1999. Why do electronic conversations seem less polite? the costs and benefits of hedging. SIGSOFT Softw. Eng. Notes, 24(2):227–235. Penelope Brown and Stephen C. Levinson. 1978. Universals in language use: Politeness phenomena. In Esther N. Goody, editor, Questions and Politeness: Strategies in Social Interaction, pages 56–311, Cambridge. Cambridge University Press. Penelope Brown and Stephen C Levinson. 1987. Politeness: some universals in language usage. Cambridge University Press. Moira Burke and Robert Kraut. 2008a. Mind your Ps and Qs: the impact of politeness and rudeness in online communities. In Proceedings of CSCW, pages 281–284. Moira Burke and Robert Kraut. 2008b. Taking up the mop: identifying future wikipedia administrators. In CHI ’08 extended abstracts on Human factors in computing systems, pages 3441–3446. Andrew Sangpil Byon. 2006. The role of linguistic indirectness and honorifics in achieving linguistic politeness in Korean requests. Journal of Politeness Research, 2(2):247–276. Paul Chilton. 1990. Politeness, politics, and diplomacy. Discourse and Society, 1(2):201–224. Herbert H. Clark and Dale H. Schunk. 1980. Polite responses to polite requests. Cognition, 8(1):111– 143. Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power: Language effects and power differences in social interaction. In Proceedings of WWW, pages 699–708. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC, pages 449–454. Christopher P. Diehl, Galileo Namata, and Lise Getoor. 2007. Relationship identification for social network discovery. In Proceedings of the AAAI Workshop on Enhanced Messaging, pages 546–552. Kirk W Duthler. 2006. The Politeness of Requests Made Via Email and Voicemail: Support for the Hyperpersonal Model. Journal of Computer-Mediated Communication, 11(2):500–521. Manaal Faruqui and Sebastian Pado. 2012. Towards a model of formal and informal address in english. In Proceedings of EACL, pages 623–633. 258 Elen P. Francik and Herbert H. Clark. 1985. How to make requests that overcome obstacles to compliance. Journal of Memory and Language, 24:560– 568. Eric Gilbert. 2012. Phrases that signal workplace hierarchy. In Proceedings of CSCW, pages 1037–1046. H. Paul Grice. 1975. Logic and conversation. In Peter Cole and Jerry Morgan, editors, Syntax and Semantics, volume 3: Speech Acts, pages 43–58. Academic Press, New York. S Gupta, M Walker, and D Romano. 2007. How rude are you?: Evaluating politeness and affect in interaction. Affective Computing and Intelligent Interaction, pages 203–217. Samuel Gyasi Obeng. 1997. Language and politics: Indirectness in political discourse. Discourse and Society, 8(1):49–83. S. I. Harada. 1976. Honorifics. In Masayoshi Shibatani, editor, Syntax and Semantics, volume 5: Japanese Generative Grammar, pages 499–561. Academic Press, New York. Susan Herring. 1994. Politeness in computer culture: Why women thank and men flame. 
In Cultural performances: Proceedings of the third Berkeley women and language conference, volume 278, page 94. Janet Holmes and Maria Stubbe. 2005. Power and Politeness in the Workplace: A Sociolinguistic Analysis of Talk at Work. Longman, London. Ken Hyland. 2005. Metadiscourse: Exploring Interaction in Writing. Continuum, London and New York. Sachiko Ide. 1989. Formal forms and discernment: Two neglected aspects of universals of linguistic politeness. Multilingua, 8(2–3):223–248. David Kaplan. 1999. What is meaning? Explorations in the theory of Meaning as Use. Brief version — draft 1. Ms., UCLA. Robin Lakoff. 1973. The logic of politeness; or, miding your P’s and Q’s. In Proceedings of the 9th Meeting of the Chicago Linguistic Society, pages 292–305. Robin Lakoff. 1977. What you can do with words: Politeness, pragmatics and performatives. In Proceedings of the Texas Conference on Performatives, Presuppositions and Implicatures, pages 79–106. Geoffrey N. Leech. 1983. Principles of Pragmatics. Longman, London and New York. Jure Leskovec, Daniel Huttenlocher, and Jon Kleinberg. 2010. Governance in Social Media: A case study of the Wikipedia promotion process. In Proceedings of ICWSM, pages 98–105. Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion Observer: analyzing and comparing opinions on the Web. In Proceedings of WWW, pages 342–351. Yoshiko Matsumoto. 1988. Reexamination of the universality of face: Politeness phenomena in Japanese. Journal of Pragmatics, 12(4):403–426. Andrew McCallum, Xuerui Wang, and Andr’es Corrada-Emmanuel. 2007. Topic and role discovery in social networks with experiments on Enron and academic email. Journal of Artificial Intelligence Research, 30(1):249–272. Robert Munro, Steven Bethard, Victor Kuperman, Vicky Tzuyin Lai, Robin Melnick, Christopher Potts, Tyler Schnoebelen, and Harry Tily. 2010. Crowdsourcing and language studies: the new generation of linguistic data. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 122–130. Kelly Peterson, Matt Hohensee, and Fei Xia. 2011. Email formality in the workplace: A case study on the enron corpus. In Proceedings of the ACL Workshop on Language in Social Media, pages 86–95. Vinodkumar Prabhakaran, Owen Rambow, and Mona Diab. 2012. Predicting Overt Display of Power in Written Dialogs. In Proceedings of NAACL-HLT, pages 518–522. Priscilla S. Rogers and Song Mei Lee-Wong. 2003. Reconceptualizing politeness to accommodate dynamic tensions in subordinate-to-superior reporting. Journal of Business and Technical Communication, 17(4):379–412. Andrew J. Scholand, Yla R. Tausczik, and James W. Pennebaker. 2010. Social language network analysis. In Proceedings of CSCW, pages 23–26. Robert Van Rooy. 2003. Being polite is a handicap: Towards a game theoretical analysis of polite linguistic behavior. In Proceedings of TARK, pages 45–58. Marilyn A Walker, Janet E Cahn, and Stephen J Whittaker. 1997. Improvising linguistic style: social and affective bases for agent personality. In Proceedings of AGENTS, pages 96–105. William Yang Wang, Samantha Finkelstein, Amy Ogan, Alan W. Black, and Justine Cassell. 2012. ”love ya, jerkface”: Using sparse log-linear models to build positive and impolite relationships with teens. In Proceedings of SIGDIAL, pages 20–29. Richard J. Watts. 2003. Politeness. Cambridge University Press, Cambridge. Ian H Witten and Eibe Frank. 2005. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann. 259
2013
25
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 260–269, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Modeling Thesis Clarity in Student Essays Isaac Persing and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {persingq,vince}@hlt.utdallas.edu Abstract Recently, researchers have begun exploring methods of scoring student essays with respect to particular dimensions of quality such as coherence, technical errors, and relevance to prompt, but there is relatively little work on modeling thesis clarity. We present a new annotated corpus and propose a learning-based approach to scoring essays along the thesis clarity dimension. Additionally, in order to provide more valuable feedback on why an essay is scored as it is, we propose a second learning-based approach to identifying what kinds of errors an essay has that may lower its thesis clarity score. 1 Introduction Automated essay scoring, the task of employing computer technology to evaluate and score written text, is one of the most important educational applications of natural language processing (NLP) (see Shermis and Burstein (2003) and Shermis et al. (2010) for an overview of the state of the art in this task). A major weakness of many existing scoring engines such as the Intelligent Essay AssessorTM(Landauer et al., 2003) is that they adopt a holistic scoring scheme, which summarizes the quality of an essay with a single score and thus provides very limited feedback to the writer. In particular, it is not clear which dimension of an essay (e.g., style, coherence, relevance) a score should be attributed to. Recent work addresses this problem by scoring a particular dimension of essay quality such as coherence (Miltsakaki and Kukich, 2004), technical errors, Relevance to Prompt (Higgins et al., 2004), and organization (Persing et al., 2010). Essay grading software that provides feedback along multiple dimensions of essay quality such as E-rater/Criterion (Attali and Burstein, 2006) has also begun to emerge. Nevertheless, there is an essay scoring dimension for which few computational models have been developed — thesis clarity. Thesis clarity refers to how clearly an author explains the thesis of her essay, i.e., the position she argues for with respect to the topic on which the essay is written.1 An essay with a high thesis clarity score presents its thesis in a way that is easy for the reader to understand, preferably but not necessarily directly, as in essays with explicit thesis sentences. It additionally contains no errors such as excessive misspellings that make it more difficult for the reader to understand the writer’s purpose. Our goals in this paper are two-fold. First, we aim to develop a computational model for scoring the thesis clarity of student essays. Because there are many reasons why an essay may receive a low thesis clarity score, our second goal is to build a system for determining why an essay receives its score. We believe the feedback provided by this system will be more informative to a student than would a thesis clarity score alone, as it will help her understand which aspects of her writing need to be improved in order to better convey her thesis. To this end, we identify five common errors that impact thesis clarity, and our system’s purpose is to determine which of these errors occur in a given essay. 
We evaluate our thesis clarity scoring model and error identification system on a data set of 830 essays annotated with both thesis clarity scores and errors. In sum, our contributions in this paper are threefold. First, we develop a scoring model and error identification system for the thesis clarity dimension on student essays. Second, we use features explicitly designed for each of the identified error 1An essay’s thesis is the overall message of the entire essay. This concept is unbound from the the concept of thesis sentences, as even an essay that never explicitly states its thesis in any of its sentences may still have an overall message that can be inferred from the arguments it makes. 260 Topic Languages Essays Most university degrees are theoretical and do not prepare students for the real world. They are therefore of very little value. 13 131 The prison system is outdated. No civilized society should punish its criminals: it should rehabilitate them. 11 80 In his novel Animal Farm, George Orwell wrote “All men are equal but some are more equal than others.” How true is this today? 10 64 Table 1: Some examples of writing topics. types in order to train our scoring model, in contrast to many existing systems for other scoring dimensions, which use more general features developed without the concept of error classes. Third, we make our data set consisting of thesis clarity annotations of 830 essays publicly available in order to stimulate further research on this task. Since progress in thesis clarity modeling is hindered in part by the lack of a publicly annotated corpus, we believe that our data set will be a valuable resource to the NLP community. 2 Corpus Information We use as our corpus the 4.5 million word International Corpus of Learner English (ICLE) (Granger et al., 2009), which consists of more than 6000 essays written by university undergraduates from 16 countries and 16 native languages who are learners of English as a Foreign Language. 91% of the ICLE texts are argumentative. We select a subset consisting of 830 argumentative essays from the ICLE to annotate and use for training and testing of our models of essay thesis clarity. Table 1 shows three of the thirteen topics selected for annotation. Fifteen native languages are represented in the set of essays selected for annotation. 3 Corpus Annotation For each of the 830 argumentative essays, we ask two native English speakers to (1) score it along the thesis clarity dimension and (2) determine the subset of the five pre-defined errors that detracts from the clarity of its thesis. Scoring. Annotators evaluate the clarity of each essay’s thesis using a numerical score from 1 to 4 at half-point increments (see Table 2 for a description of each score). This contrasts with previous work on essay scoring, where the corpus is Score Description of Thesis Clarity 4 essay presents a very clear thesis and requires little or no clarification 3 essay presents a moderately clear thesis but could benefit from some clarification 2 essay presents an unclear thesis and would greatly benefit from further clarification 1 essay presents no thesis of any kind and it is difficult to see what the thesis could be Table 2: Descriptions of the meaning of scores. annotated with a binary decision (i.e., good or bad) for a given scoring dimension (e.g., Higgins et al. (2004)). 
Hence, our annotation scheme not only provides a finer-grained distinction of thesis clarity (which can be important in practice), but also makes the prediction task more challenging. To ensure consistency in annotation, we randomly select 100 essays to have graded by both annotators. Analysis of these essays reveals that, though annotators only exactly agree on the thesis clarity score of an essay 36% of the time, the scores they apply are within 0.5 points in 62% of essays and within 1.0 point in 85% of essays. Table 3 shows the number of essays that receive each of the seven scores for thesis clarity. score 1.0 1.5 2.0 2.5 3.0 3.5 4.0 essays 4 9 52 78 168 202 317 Table 3: Distribution of thesis clarity scores. Error identification. To identify what kinds of errors make an essay’s thesis unclear, we ask one of our annotators to write 1–4 sentence critiques of thesis clarity on 527 essays, and obtain our list of five common error classes by categorizing the things he found to criticize. We present our annotators with descriptions of these five error classes (see Table 4), and ask them to assign zero or more of the error types to each essay. It is important to note that we ask our annotators to mark an essay with one of these errors only when the error makes the thesis less clear. So for example, an essay whose thesis is irrelevant to the prompt but is explicitly and otherwise clearly stated would not be marked as having a Relevance to Prompt error. If the irrelevant thesis is stated in such a way that its inapplicability to the prompt causes the reader to be confused about what the essay’s purpose is, however, then the essay would be assigned a Relevance to Prompt error. To measure inter-annotator agreement on error identification, we ask both annotators to identify 261 Id Error Description CP Confusing Phrasing The thesis is phrased oddly, making it hard to understand the writer’s point. IPR Incomplete Prompt Response The thesis seems to leave some part of a multi-part prompt unaddressed. R Relevance to Prompt The apparent thesis’s weak relation to the prompt causes confusion. MD Missing Details The thesis leaves out important detail needed to understand the writer’s point. WP Writer Position The thesis describes a position on the topic without making it clear that this is the position the writer supports. Table 4: Descriptions of thesis clarity errors. the errors in the same 100 essays that were doublyannotated with thesis clarity scores. We then compute Cohen’s Kappa (Carletta, 1996) on each error from the two sets of annotations, obtaining an average Kappa value of 0.75, which indicates fair agreement. Table 5 shows the number of essays assigned to each of the five thesis clarity errors. As we can see, Confusing Phrasing, Incomplete Prompt Response, and Relevance to Prompt are the major error types. error CP IPR R MD WP essays 152 123 142 47 39 Table 5: Distribution of thesis clarity errors. Relationship between clarity scores and error classes. To determine the relationship between thesis clarity scores and the five error classes, we train a linear SVM regressor using the SVMlight software package (Joachims, 1999) with the five error types as independent variables and the reduction in thesis clarity score due to errors as the dependent variable. 
More specifically, each training example consists of a target, which we set to the essay’s thesis clarity score minus 4.0, and six binary features, each of the first five representing the presence or absence of one of the five errors in the essay, and the sixth being a bias feature which we always set to 1. Representing the reduction in an essay’s thesis clarity score with its thesis clarity score minus 4.0 allows us to more easily interpret the error and bias weights of the trained system, as under this setup, each error’s weight should be a negative number reflecting how many points an essay loses due to the presence of that error. The bias feature allows for the possibility that an essay may lose points from its thesis clarity score for problems not accounted for in our five error classes. By setting this bias feature to 1, we tell our learner that an essay’s default score may be less than 4.0 because these other problems may lower the average score of otherwise perfect essays. After training, we examined the weight parameters of the learned regressor and found that they were all negative: −0.6 for CP, −0.5998 for IPR, −0.8992 for R, −0.6 for MD, −0.8 for WP, and −0.1 for the bias. These results are consistent with our intuition that each of the enumerated error classes has a negative impact on thesis clarity score. In particular, each has a demonstrable negative impact, costing essays an average of more than 0.59 points when it occurs. Moreover, this set of errors accounts for a large majority of all errors impacting thesis clarity because unenumerated errors cost essays an average of only one-tenth of one point on the four-point thesis clarity scale. 4 Error Classification In this section, we describe in detail our system for identifying thesis clarity errors. 4.1 Model Training and Application We recast the problem of identifying which thesis clarity errors apply to an essay as a multi-label classification problem, wherein each essay may be assigned zero or more of the five pre-defined error types. To solve this problem, we train five binary classifiers, one for each error type, using a one-versus-all scheme. So in the binary classification problem for identifying error ei, we create one training instance from each essay in the training set, labeling the instance as positive if the essay has ei as one of its labels, and negative otherwise. Each instance is represented by seven types of features, including two types of baseline features (Section 4.2) and five types of features we introduce for error identification (Section 4.3). After creating training instances for error ei, we train a binary classifier, bi, for identifying which test essays contain error ei. We use SVMlight for classifier training with the regularization parameter, C, set to ci. To improve classifier performance, we perform feature selection. While we employ seven types of features (see Sections 4.2 and 4.3), only the word n-gram features are subject to feature selection.2 Specifically, we employ 2We do not apply feature selection to the remaining fea262 the top ni n-gram features as selected according to information gain computed over the training data (see Yang and Pedersen (1997) for details). Finally, since each classifier assigns a real value to each test essay presented to it indicating its confidence that the essay should be assigned error ei, we employ a classification threshold ti to decide how high this real value must be in order for our system to conclude that an essay contains error ei. 
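To make this one-versus-all scheme concrete, the sketch below trains one binary classifier per error and applies a per-error threshold at prediction time. It is an illustration under stated substitutions, not the authors' implementation: scikit-learn's LinearSVC stands in for SVMlight, mutual information stands in for information gain, feature selection is applied to the whole feature vector rather than only to the word n-grams, and the per-error parameters ci, ni, and ti are assumed to be given, with ti expressed directly as a decision-value cutoff.

```python
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import LinearSVC

ERRORS = ["CP", "IPR", "R", "MD", "WP"]

def train_error_classifiers(X, labels, params):
    """One-versus-all training: one binary classifier per thesis clarity error.

    `labels[e]` is a boolean vector marking which training essays exhibit
    error e; `params[e]` holds that error's (C, n_features) setting.
    (Hypothetical data layout, for illustration only.)
    """
    models = {}
    for e in ERRORS:
        C, n = params[e]
        # mutual information as a stand-in for information-gain selection
        selector = SelectKBest(mutual_info_classif, k=n).fit(X, labels[e])
        clf = LinearSVC(C=C).fit(selector.transform(X), labels[e])
        models[e] = (selector, clf)
    return models

def predict_errors(models, X_test, thresholds):
    """Assign error e to a test essay when its decision value exceeds t_e."""
    out = {}
    for e, (selector, clf) in models.items():
        scores = clf.decision_function(selector.transform(X_test))
        out[e] = scores > thresholds[e]
    return out
```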
Using held-out validation data, we jointly tune the three parameters in the previous paragraph, ci, ni, and ti, to optimize the F-score achieved by bi for error ei.3 However, an exact solution to this optimization problem is computationally expensive. Consequently, we find a local maximum by employing the simulated annealing algorithm (Kirkpatrick et al., 1983), altering one parameter at a time to optimize F-score by holding the remaining parameters fixed. After training the classifiers, we use them to classify the test set essays. The test instances are created in the same way as the training instances. 4.2 Baseline Features Our Baseline system for error classification employs two types of features. First, since labeling essays with thesis clarity errors can be viewed as a text categorization task, we employ lemmatized word unigram, bigram, and trigram features that occur in the essay that have not been removed by the feature selection parameter ni. Because the essays vary greatly in length, we normalize each essay’s set of word features to unit length. The second type of baseline features is based on random indexing (Kanerva et al., 2000). Random indexing is “an efficient, scalable and incremental alternative” (Sahlgren, 2005) to Latent Semantic Indexing (Deerwester et al., 1990; Landauer ture types since each of them includes only a small number of overall features that are expected to be useful. 3For parameter tuning, we employ the following values. ci may be assigned any of the values 102, 103, 104, 105, or 106. ni may be assigned any of the values 3000, 4000, 5000, or ALL, where ALL means all features are used. For ti, we split the range of classification values bi returns for the test set into tenths. ti may take the values 0.0, 0.1, 0.2, . . ., 1.0, and X, where 0.0 classifies all instances as negative, 0.1 classifies only instances bi assigned values in the top tenth of the range as positive, and so on, and X is the default threshold, labeling essays as positive instances of ei only if bi returns for them a value greater than 0. It was necessary to assign ti in this way because the range of values classifiers return varies greatly depending on which error type we are classifying and which other parameters we use. This method gives us reasonably fine-grained thresholds without having to try an unreasonably large number of values for ti. and Dutnais, 1997) which allows us to automatically generate a semantic similarity measure between any two words. We train our random indexing model on over 30 million words of the English Gigaword corpus (Parker et al., 2009) using the S-Space package (Jurgens and Stevens, 2010). We expect that features based on random indexing may be particularly useful for the Incomplete Prompt Response and Relevance to Prompt errors because they may help us find text related to the prompt even if some of its components have been rephrased (e.g., an essay may talk about “jail” rather than “prison”, which is mentioned in one of the prompts). For each essay, we therefore generate four random indexing features, one encoding the entire essay’s similarity to the prompt, another encoding the essay’s highest individual sentence’s similarity to the prompt, a third encoding the highest entire essay similarity to one of the prompt sentences, and finally one encoding the highest individual sentence similarity to an individual prompt sentence. 
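To picture the four prompt-similarity features, here is a simplified stand-in. We do not have the trained random indexing model, and the measure actually used for comparing groups of words is Higgins and Burstein's (2007) method introduced next; the sketch below therefore falls back on cosine similarity between averaged word vectors from a generic `word_vec` lookup, purely for illustration.

```python
import numpy as np

def text_vec(words, word_vec):
    """Average the available word vectors for a token sequence (or None)."""
    vecs = [word_vec[w] for w in words if w in word_vec]
    return np.mean(vecs, axis=0) if vecs else None

def cos(u, v):
    if u is None or v is None:
        return 0.0
    den = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / den) if den else 0.0

def prompt_similarity_features(essay_sents, prompt_sents, word_vec):
    """Four features: essay vs. prompt, best essay sentence vs. prompt,
    essay vs. best prompt sentence, best essay sentence vs. best prompt sentence."""
    essay = text_vec([w for s in essay_sents for w in s], word_vec)
    prompt = text_vec([w for s in prompt_sents for w in s], word_vec)
    e_sents = [text_vec(s, word_vec) for s in essay_sents]
    p_sents = [text_vec(s, word_vec) for s in prompt_sents]
    return [
        cos(essay, prompt),
        max(cos(v, prompt) for v in e_sents),
        max(cos(essay, v) for v in p_sents),
        max(cos(v, w) for v in e_sents for w in p_sents),
    ]
```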
Since random indexing does not provide a straightforward way to measure similarity between groups of words such as sentences or essays, we use Higgins and Burstein’s (2007) method to generate these features. 4.3 Novel Features Next, we introduce five types of novel features. Spelling. One problem we note when examining the information gain top-ranked features for the Confusing Phrasing error is that there are very few common confusing phrases that can contribute to this error. Errors of this type tend to be unique, and hence are not very useful for error classification (because we are not likely to see the same error in the training and test sets). We notice, however, that there are a few misspelled words at the top of the list. This makes sense because a thesis sentence containing excessive misspellings may be less clear to the reader. Even the most common spelling errors, however, tend to be rare. Furthermore, we ask our annotators to only annotate an error if it makes the thesis less clear. The mere presence of an awkward phrase or misspelling is not enough to justify the Confusing Phrasing label. Hence, we introduce a misspelling feature whose value is the number of spelling errors in an essay’s most-misspelled sentence.4 4We employ SCOWL (http://wordlist. sourceforge.net/) as our dictionary, assuming that a 263 Keywords. Improving the prediction of majority classes can greatly enhance our system’s overall performance. Hence, since we have introduced the misspelling feature to enhance our system’s performance on one of the more frequently occurring errors (Confusing Phrasing), it makes sense to introduce another type of feature to improve performance on the other two most frequent errors, Incomplete Prompt Response and Relevance to Prompt. For this reason, we introduce keyword features. To use this feature, we first examine each of the 13 essay prompts, splitting it into its component pieces. For our purposes, a component of a prompt is a prompt substring such that, if an essay does not address it, it may be assigned the Incomplete Prompt Response label. Then, for each component, we manually select the most important (primary) and second most important (secondary) words that it would be good for a writer to use to address the component. To give an example, the lemmatized version of the third component of the second essay in Table 1 is “it should rehabilitate they”. For this component we selected “rehabilitate” as a primary keyword and “society” as a secondary keyword. To compute one of our keyword features, we compute the random indexing similarity between the essay and each group of primary keywords taken from components of the essay’s prompt and assign the feature the lowest of these values. If this feature has a low value, that suggests that the essay may have an Incomplete Prompt Response error because the essay probably did not respond to the part of the prompt from which this value came. To compute another of the keyword features, we count the numbers of combined primary and secondary keywords the essay contains from each component of its prompt, and divide each number by the total number of primary and secondary features for that component. If the greatest of these fractions has a low value, that indicates the essay’s thesis might not be very Relevant to the Prompt.5 Aggregated word n-gram features. 
Other ways we could measure our system’s performance (such as macro F-score) would consider our system’s performance on the less frequent errors no less important than its performance on the word that does not appear in the dictionary is misspelled. 5Space limitations preclude a complete listing of the keyword features. See our website at http://www.hlt. utdallas.edu/˜persingq/ICLE/ for the complete list. most frequent errors. For this reason, it now makes sense for us to introduce a feature tailored to help our system do better at identifying the least-frequent error types, Missing Details and Writer Position, each of which occurs in fewer than 50 essays. To help with identification of these error classes, we introduce aggregated word n-gram features. While we mention in the previous section one of the reasons regular word n-gram features can be expected to help with these error classes, one of the problems with regular word n-gram features is that it is fairly infrequent for the exact same useful phrase to occur too frequently. Additionally, since there are numerous word n-grams, some infrequent ones may just by chance only occur in positive training set instances, causing the learner to think they indicate the positive class when they do not. To address these problems, for each of the five error classes ei, we construct two Aggregated word features Aw+i and Aw−i. For each essay, Aw+i counts the number of word n-grams we believe indicate that an essay is a positive example of ei, and Aw−i counts the number of word n-grams we believe indicate an essay is not an example of ei. Aw+ n-grams for the Missing Details error tend to include phrases like “there is something” or “this statement”, while Aw−ngrams are often words taken directly from an essay’s prompt. N-grams used for Writer Position’s Aw+ tend to suggest the writer is distancing herself from whatever statement is being made such as “every person”, but n-grams for this error’s Aw−feature are difficult to find. Since Aw+i and Aw−i are so error specific, they are only included in an essay’s feature representation when it is presented to learner bi. So while aggregated word n-grams introduce ten new features, each learner bi only sees two of these (Aw+i and Aw−i). We construct the lists of word n-grams that are aggregated for use as the Aw+ and Aw−feature values in the following way. For each error class ei, we sort the list of all features occurring at least ten times in the training set by information gain. A human annotator then manually inspects the top thousand features in each of the five lists and sorts each list’s features into three categories. The first category for ei’s list consists of features that indicate an essay may be a positive instance. Each word n-gram from this list that occurs in an essay increases the essay’s Aw+i value by one. 264 Similarly, any word n-gram sorted into the second category, which consists of features the annotator thinks indicate a negative instance of ei, increases the essay’s Aw−value by one. The third category just contains all the features the annotator did not believe were useful enough to either class, and we make no further use of those features. For most error types, only about 12% of the top 1000 features get sorted into one of the first two categories. POS n-grams. 
We might further improve our system’s performance on the Missing Details error type by introducing a feature that aggregates part-of-speech (POS) tag n-grams in the same way that the Aw features aggregate word n-gram features. For this reason, we include POS tag 1, 2, 3, and 4-grams in the set of features we sort in the previous paragraph. For each error ei, we select POS tag n-grams from the top thousand features of the information gain sorted list to count toward the Ap+i and Ap−i aggregation features. We believe this kind of feature may help improve performance on Missing Details because the list of features aggregated to generate the Ap+i feature’s value includes POS n-gram features like CC “ NN ” (scare quotes). This feature type may also help with Confusing Phrasing because the list of POS tag n-grams our annotator generated for its Ap+i contains useful features like DT NNS VBZ VBN (e.g., “these signals has been”), which captures noun-verb disagreement. Semantic roles. Our last aggregated feature is generated using FrameNet-style semantic role labels obtained using SEMAFOR (Das et al., 2010). For each sentence in our data set, SEMAFOR identifies each semantic frame occurring in the sentence as well as each frame element that participates in it. For example, a semantic frame may describe an event that occurs in a sentence, and the event’s frame elements may be the people or objects that participate in the event. For a more concrete example, consider the sentence “They said they do not believe that the prison system is outdated”. This sentence contains a Statement frame because a statement is made in it. One of the frame elements participating in the frame is the Speaker “they”. From this frame, we would extract a feature pairing the frame together with its frame element to get the feature “StatementSpeaker-they”. This feature indicates that the essay it occurs in might be a positive instance of the Writer Position error since it tells us the writer is attributing some statement being made to someone else. Hence, this feature along with several others like “Awareness-Cognizer-we all” are useful when constructing the lists of frame features for Writer Position’s aggregated frame features Af+i and Af−i. Like every other aggregated feature, Af+i and Af−i are generated for every error ei. 5 Score Prediction Because essays containing thesis clarity errors tend to have lower thesis clarity scores than essays with fewer errors, we believe that thesis clarity scores can be predicted for essays by utilizing the same features we use for identifying thesis clarity errors. Because our score prediction system uses the same feature types we use for thesis error identification, each essay’s vector space representation remains unchanged. Only its label changes to one of the values in Table 2 in order to reflect its thesis clarity score. To make use of the fact that some pairs of scores are more similar than others (e.g., an essay with a score of 3.5 is more similar to an essay with a score of 4.0 than it is to one with a score of 1.0), we cast thesis clarity score prediction as a regression rather than classification task. 
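A minimal sketch of this regression setup follows, with scikit-learn's LinearSVR standing in for SVMlight's regression option and the word n-gram feature selection step omitted for brevity; the choice of mean absolute deviation as the validation criterion is our assumption, not something the paper specifies.

```python
from sklearn.svm import LinearSVR

def train_score_regressor(X_train, y_train, X_val, y_val, C_grid):
    """Try each candidate C, keep the regressor with the lowest validation
    error; targets are the raw thesis clarity scores (1.0-4.0)."""
    best_err, best_reg = float("inf"), None
    for C in C_grid:
        reg = LinearSVR(C=C).fit(X_train, y_train)
        pred = reg.predict(X_val)
        err = sum(abs(a - e) for a, e in zip(y_val, pred)) / len(y_val)
        if err < best_err:
            best_err, best_reg = err, reg
    return best_reg
```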
Treating thesis clarity score prediction as a regression problem removes our need for a classification threshold parameter like the one we use in the error identification problem, but if we use SVMlight’s regression option, it does not remove the need for tuning a regularization parameter, C, or a feature selection parameter, n.6 We jointly tune these two parameters to optimize performance on held-out validation data by performing an exhaustive search in the parameter space.7 After we select the features, construct the essay instances, train a regressor on training set essays, and tune parameters on validation set essays, we can use the regressor to obtain thesis clarity scores on test set essays.
6Before tuning the feature selection parameter, we have to sort the list of n-gram features occurring in the training set. To enable the use of information gain as the sorting criterion, we treat each distinct score as its own class.
7The absence of the classification threshold parameter and the fact that we do not need to train multiple learners, one for each score, make it feasible for us to do two things. First, we explore a wider range of values for the two parameters: we allow C to take any value from 10^0, 10^1, 10^2, 10^3, 10^4, 10^5, 10^6, or 10^7, and we allow n to take any value from 1000, 2000, 3000, 4000, 5000, or ALL. Second, we exhaustively explore the space defined by these parameters in order to obtain an exact solution to the parameter optimization problem.
6 Evaluation In this section, we evaluate our systems for error identification and scoring. All the results we report are obtained via five-fold cross-validation experiments. In each experiment, we use 3/5 of our labeled essays for model training, another 1/5 for parameter tuning, and the final 1/5 for testing. 6.1 Error Identification Evaluation metrics. To evaluate our thesis clarity error type identification system, we compute precision, recall, micro F-score, and macro F-score, which are calculated as follows. Let tpi be the number of test essays correctly labeled as positive by error ei’s binary classifier bi; pi be the total number of test essays labeled as positive by bi; and gi be the total number of test essays that belong to ei according to the gold standard. Then, the precision (Pi), recall (Ri), and F-score (Fi) for bi and the macro F-score (ˆF) of the combined system for one test fold are calculated by

\[ P_i = \frac{tp_i}{p_i}, \quad R_i = \frac{tp_i}{g_i}, \quad F_i = \frac{2 P_i R_i}{P_i + R_i}, \quad \hat{F} = \frac{1}{5}\sum_i F_i . \]

However, the macro F-score calculation can be seen as giving too much weight to the less frequent errors. To avoid this problem, we also calculate for each system the micro precision, recall, and F-score (P, R, and F), where

\[ P = \frac{\sum_i tp_i}{\sum_i p_i}, \quad R = \frac{\sum_i tp_i}{\sum_i g_i}, \quad F = \frac{2PR}{P + R}. \]

Since we perform five-fold cross-validation, each value we report for each of these measures is an average over its values for the five folds.8 Results and discussion. Results on error identification, expressed in terms of precision, recall, micro F-score, and macro F-score, are shown in the first four columns of Table 6. Our Baseline system, which only uses word n-gram and random indexing features, seems to perform uniformly poorly across both micro and macro F-scores (F and ˆF; see row 1).
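For reference, the per-fold metrics just defined can be computed as follows; this is an illustrative reimplementation assuming boolean prediction and gold vectors per error, not the evaluation code behind Table 6.

```python
from typing import Dict, List

def fold_scores(pred: Dict[str, List[bool]], gold: Dict[str, List[bool]]):
    """Per-error P/R/F, macro F, and micro P/R/F for one test fold.

    `pred[e][j]` is True iff classifier b_e labels essay j as a positive
    instance of error e; `gold` holds the gold-standard labels.
    """
    per_error_f = {}
    tp_sum = p_sum = g_sum = 0
    for e in pred:
        tp = sum(1 for y, g in zip(pred[e], gold[e]) if y and g)
        p = sum(pred[e])          # essays labeled positive by b_e
        g = sum(gold[e])          # essays that truly contain error e
        prec = tp / p if p else 0.0
        rec = tp / g if g else 0.0
        per_error_f[e] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        tp_sum, p_sum, g_sum = tp_sum + tp, p_sum + p, g_sum + g
    macro_f = sum(per_error_f.values()) / len(per_error_f)
    micro_p = tp_sum / p_sum if p_sum else 0.0
    micro_r = tp_sum / g_sum if g_sum else 0.0
    micro_f = (2 * micro_p * micro_r / (micro_p + micro_r)
               if micro_p + micro_r else 0.0)
    return per_error_f, macro_f, micro_p, micro_r, micro_f
```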
The per-class results9 show that, since micro F-score places more weight on the correct identification of the most frequent errors, the system’s micro F-score (31.1%) is fairly close to the average of the scores obtained on the three most frequent error classes, CP, IPR, and R, 8This averaging explains why the formula for F does not exactly hold in the Table 6 results. 9Per-class results are not shown due to space limitations. Error Identification Scoring System P R F ˆF S1 S2 S3 1 B 24.8 44.7 31.1 24.0 .658 .517 .403 2 Bm 24.2 44.2 31.2 25.3 .654 .515 .402 3 Bmk 29.2 44.2 34.9 26.7 .663 .490 .369 4 Bmkw 28.5 49.6 35.5 31.4 .651 .484 .374 5 Bmkwp 34.2 49.6 40.4 34.6 .671 .483 .377 6 Bmkwpf 33.6 54.4 41.4 37.3 .672 .486 .382 Table 6: Five-fold cross-validation results for thesis clarity error identification and scoring. and remains unaffected by very low F-scores on the two remaining infrequent classes.10 When we add the misspelling feature to the baseline, resulting in the system called Bm (row 2), the micro F-score sees a very small, insignificant improvement.11 What is pleasantly surprising, however, is that, even though the misspelling features were developed for the Confusing Phrasing error type, they actually have more of a positive impact on Missing Details and Writer Position, bumping their individual error F-scores up by about 5 and 3 percent respectively. This suggests that spelling difficulties may be correlated with these other essay-writing difficulties, despite their apparent unrelatedness. This effect is strong enough to generate the small, though insignificant, gain in macro F-score shown in the table. When we add keyword features to the system, micro F-score increases significantly by 3.7 points (row 3). The micro per-class results reveal that, as intended, keyword features improve Incomplete Prompt Response and Relevance to Prompt’s Fscores reveals that they do by 6.4 and 9.2 percentage points respectively. The macro F-scores reveal this too, though the macro F-score gains are 3.2 points and 11.5 points respectively. The macro Fscore of the overall system would likely have improved more than shown in the table if the addition of keyword features did not simultaneously reduce Missing Details’s score by several points. While we hoped that adding aggregated word n-gram features to the system (row 4) would be able to improve performance on Confusing Phrasing due to the presence of phrases such as “in university be” in the error’s Aw+i list, there turned out to be few such common phrases in the data set, 10Since parameters for optimizing micro F-score and macro F-score are selected independently, the per-class Fscores associated with micro F-score are different than those used for calculating macro F-score. Hence, when we discuss per-class changes influencing micro F-score, we refer to the former set, and otherwise we refer to the latter set. 11All significance tests are paired t-tests, with p < 0.05. 266 so performance on this class remains mostly unchanged. This feature type does, however, result in major improvements to micro and macro performance on Missing Details and Writer Position, the other two classes this feature was designed to help. Indeed, the micro F-score versions of Missing Details and Writer Position improve by 15.3 and 10.8 percentage points respectively. Since these are minority classes, however, the large improvements result in only a small, insignificant improvement in the overall system’s micro F-score. 
The macro F-score results for these classes, however, improve by 6.5% and 17.6% respectively, giving us a nearly 5-point, statistically significant bump in macro Fscore after we add this feature. Confusing Phrasing has up to now stubbornly resisted any improvement, even when we added features explicitly designed to help our system do better on this error type. When we add aggregated part of speech n-gram features on top of the previous system, that changes dramatically. Adding these features makes both our system’s F-scores on Confusing Phrasing shoot up almost 8%, resulting in a significant, nearly 4.9% improvement in overall micro F-score and a more modest but insignificant 3.2% improvement in macro F-score (row 5). The micro F-score improvement can also be partly attributed to a four point improvement in Incomplete Prompt Response’s micro Fscore. The 13.7% macro F-score improvement of the Missing Details error plays a larger role in the overall system’s macro F-score improvement than Confusing Phrasing’s improvement, however. The improvement we see in micro F-score when we add aggregated frame features (row 6) can be attributed almost solely to improvements in classification of the minority classes. This is surprising because, as we mentioned before, minority classes tend to have a much smaller impact on overall micro F-score. Furthermore, the overall micro F-score improvement occurrs despite declines in the performances on two of the majority class errors. Missing Details and Writer Position’s micro F-score performances increase by 19.1% and 13.4%. The latter is surprising only because of the magnitude of its improvement, as this feature type was explicitly intended to improve its performance. We did not expect this aggregated feature type to be especially useful for Missing Details error identification because very few of these types of features occur in its Af+i list, and there are none in its Af−i list. The few that are in the former list, however, occur fairly often and look like fairly good indicators of this error (both the examples “Event-Event-it” and “Categorization-Itemthat” occur in the positive list, and both do seem vague, indicating more details are to be desired). Overall, this system improves our baseline’s macro F-score performance significantly by 13.3% and its micro F-score performance significantly by 10.3%. As we progressed, adding each new feature type to the baseline system, there was no definite and consistent pattern to how the precisions and recalls changed in order to produce the universal increases in the F-scores that we observed for each new system. Both just tended to jerkily progress upward as new feature types were added. This confirms our intuition about these features – namely that they do not all uniformly improve our performance in the same way. Some aim to improve precision by telling us when essays are less likely to be positive instances of an error class, such as any of the Aw−i, Ap−i, or Af−i features, and others aim to tell us when an essay is more likely to be a positive instance of an error. 6.2 Scoring Scoring metrics. We design three evaluation metrics to measure the error of our thesis clarity scoring system. The S1 metric measures the frequency at which a system predicts the wrong score out of the seven possible scores. Hence, a system that predicts the right score only 25% of the time would receive an S1 score of 0.75. The S2 metric measures the average distance between the system’s score and the actual score. 
This metric reflects the idea that a system that estimates scores close to the annotator-assigned scores should be preferred over a system whose estimations are further off, even if both systems estimate the correct score at the same frequency. Finally, the S3 metric measures the average square of the distance between a system’s thesis clarity score estimations and the annotator-assigned scores. The intuition behind this metric is that not only should we prefer a system whose estimations are close to the annotator scores, but we should also prefer one whose estimations are not too frequently very far away from the annotator scores. These three scores are given by:

\[ S1 = \frac{1}{N}\sum_{j:\,A_j \neq E'_j} 1, \qquad S2 = \frac{1}{N}\sum_{j=1}^{N} |A_j - E_j|, \qquad S3 = \frac{1}{N}\sum_{j=1}^{N} (A_j - E_j)^2 , \]

where Aj, Ej, and E′j are the annotator-assigned, system-estimated, and rounded system-estimated scores12 respectively for essay j, and N is the number of essays. Results and discussion. Results on scoring are shown in the last three columns of Table 6. We see that the thesis clarity score-predicting variation of the Baseline system, which employs as features only word n-grams and random indexing features, predicts the wrong score 65.8% of the time. Its predicted score is on average 0.517 points off of the actual score, and the average squared distance between the predicted and actual scores is 0.403. We observed earlier that a high number of misspellings may be positively correlated with one or more unrelated errors. Adding the misspelling feature to the scoring systems, however, only yields minor, insignificant improvements to their performances under the three scoring metrics. While adding keyword features on top of this system does not improve the frequency with which the right score is predicted, it both tends to move the predictions closer to the actual thesis clarity score value (as evidenced by the significant improvement in S2) and ensures that predicted scores will not too often stray too far from the actual value (as evidenced by the significant improvement in S3). Overall, the scoring model employing the Bmk feature set performs significantly better than the Baseline scoring model with respect to two out of three scoring metrics. The only remaining feature type whose addition yields a significant performance improvement is the aggregated word feature type, which improves system Bmk’s S2 score significantly while having an insignificant impact on the other S metrics. Neither of the remaining aggregative features yields any significant improvements in performance. This is a surprising finding since, up until we introduced aggregated part-of-speech tag n-gram features into our regressor, each additional feature that helped with error classification made at least a small but positive contribution to at least two out of the three S scores. These aggregative features, which proved to be very powerful when assigning error labels, are not as useful for thesis
12Since our regressor assigns each essay a real value rather than an actual valid thesis clarity score, it would be difficult to obtain a reasonable S1 score without rounding the system-estimated score to one of the possible values. For that reason, we round the estimated score to the nearest of the seven scores the human annotators were permitted to assign (1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0) only when calculating S1.
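The three scoring metrics can be computed directly from the gold and predicted scores; the sketch below rounds predictions to the seven permissible scores only for S1, as described in footnote 12. It is an illustrative reimplementation rather than the authors' code.

```python
def scoring_metrics(gold, predicted):
    """Compute the S1, S2, and S3 thesis-clarity scoring metrics.

    `gold` holds annotator-assigned scores A_j, `predicted` the real-valued
    regressor outputs E_j for the same essays.
    """
    grid = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]   # permissible gold scores
    n = len(gold)
    # Round each estimate to the nearest permissible score (used for S1 only).
    rounded = [min(grid, key=lambda s: abs(s - e)) for e in predicted]
    s1 = sum(1 for a, r in zip(gold, rounded) if a != r) / n
    s2 = sum(abs(a - e) for a, e in zip(gold, predicted)) / n
    s3 = sum((a - e) ** 2 for a, e in zip(gold, predicted)) / n
    return s1, s2, s3
```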
S1 (Bmkw) S2 (Bmkwp) S3 (Bmk) Gold .25 .50 .75 .25 .50 .75 .25 .50 .75 1.0 3.5 3.5 3.5 3.0 3.2 3.5 3.1 3.2 3.3 1.5 2.5 3.0 3.0 2.8 3.1 3.2 2.6 3.0 3.2 2.0 3.0 3.0 3.5 3.0 3.2 3.5 3.0 3.1 3.4 2.5 3.0 3.5 3.5 3.0 3.3 3.6 3.0 3.3 3.5 3.0 3.0 3.5 3.5 3.1 3.4 3.5 3.1 3.3 3.5 3.5 3.5 3.5 4.0 3.2 3.4 3.6 3.2 3.4 3.5 4.0 3.5 3.5 4.0 3.4 3.6 3.8 3.4 3.5 3.7 Table 7: Regressor scores for top three systems. clarity scoring. To more closely examine the behavior of the best scoring systems, in Table 7 we chart the distributions of scores they predict for each gold standard score. As an example of how to read this table, consider the number 2.8 appearing in row 1.5 in the .25 column of the S2 (Bmkwp) region. This means that 25% of the time, when system Bmkwp (which obtains the best S2 score) is presented with a test essay having a gold standard score of 1.5, it predicts that the essay has a score less than or equal to 2.8 for the S2 metric. From this table, we see that each of the best systems has a strong bias toward predicting more frequent scores as there are no numbers less than 3.0 in the 50% columns, and about 82.8% of all essays have gold standard scores of 3.0 or above. Nevertheless, no system relies entirely on bias, as evidenced by the fact that each column in the table has a tendency for its scores to ascend as the gold standard score increases, implying that the systems have some success at predicting lower scores for essays with lower gold standard scores. Finally, we note that the difference in error weighting between the S2 and S3 scoring metrics appears to be having its desired effect, as there is a strong tendency for each entry in the S3 subtable to be less than or equal to its corresponding entry in the S2 subtable due to the greater penalty the S3 metric imposes for predictions that are very far away from the gold standard scores. 7 Conclusion We examined the problem of modeling thesis clarity errors and scoring in student essays. In addition to developing these models, we proposed novel features for use in our thesis clarity error model and employed these features, each of which was explicitly designed for one or more of the error types, to train our scoring model. We make our thesis clarity annotations publicly available in order to stimulate further research on this task. 268 Acknowledgments We thank the three anonymous reviewers for their detailed and insightful comments on an earlier draft of the paper. This work was supported in part by NSF Grants IIS-1147644 and IIS-1219142. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of NSF. References Yigal Attali and Jill Burstein. 2006. Automated essay scoring with E-rater v.2.0. Journal of Technology, Learning, and Assessment, 4(3). Jean Carletta. 1996. Assessing agreement on classification tasks: The Kappa statistic. Computational Linguistics, 22(2):249–254. Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A. Smith. 2010. Probabilistic frame-semantic parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 948–956. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41(6):391–407. 
Sylviane Granger, Estelle Dagneaux, Fanny Meunier, and Magali Paquot. 2009. International Corpus of Learner English (Version 2). Presses universitaires de Louvain. Derrick Higgins and Jill Burstein. 2007. Sentence similarity measures for essay coherence. In Proceedings of the 7th International Workshop on Computational Semantics. Derrick Higgins, Jill Burstein, Daniel Marcu, and Claudia Gentile. 2004. Evaluating multiple aspects of coherence in student essays. In Human Language Technologies: The 2004 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 185–192. Thorsten Joachims. 1999. Making large-scale SVM learning practical. In B. Sch¨olkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods Support Vector Learning, Chapter 11, pages 169– 184. MIT Press, Cambridge, MA. David Jurgens and Keith Stevens. 2010. The S-Space package: An open source package for word space models. In Proceedings of the ACL 2010 System Demonstrations, pages 30–35. Pentti Kanerva, Jan Kristoferson, and Anders Holst. 2000. Random indexing of text samples for Latent Semantic Analysis. In Proceedings the 22nd Annual Conference of the Cognitive Science Society, pages 103–106. Scott Kirkpatrick, C. D. Gelatt, and Mario P. Vecchi. 1983. Optimization by simulated annealing. Science, 220(4598):671–680. Thomas K Landauer and Susan T. Dumais. 1997. A solution to Plato’s problem: The Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge. Psychological review, pages 211–240. Thomas K. Landauer, Darrell Laham, and Peter W. Foltz. 2003. Automated scoring and annotation of essays with the Intelligent Essay AssessorTM. In Automated Essay Scoring: A Cross-Disciplinary Perspective, pages 87–112. Lawrence Erlbaum Associates, Inc., Mahwah, NJ. Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25–55. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2009. English Gigaword Fourth Edition. Linguistic Data Consortium, Philadelphia. Isaac Persing, Alan Davis, and Vincent Ng. 2010. Modeling organization in student essays. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 229– 239. Magnus Sahlgren. 2005. An introduction to random indexing. In Proceedings of the Methods and Applications of Semantic Indexing Workshop at the 7th International Conference on Terminology and Knowledge Engineering. Mark D. Shermis and Jill C. Burstein. 2003. Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum Associates, Inc., Mahwah, NJ. Mark D. Shermis, Jill Burstein, Derrick Higgins, and Klaus Zechner. 2010. Automated essay scoring: Writing assessment and instruction. In International Encyclopedia of Education (3rd edition). Elsevier, Oxford, UK. Yiming Yang and Jan O. Pedersen. 1997. A comparative study on feature selection in text categorization. In Proceedings of the 14th International Conference on Machine Learning, pages 412–420. 269
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 270–280, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Translating Italian connectives into Italian Sign Language Camillo Lugaresi University of Illinois at Chicago Politecnico di Milano [email protected] Barbara Di Eugenio Department of Computer Science University of Illinois at Chicago [email protected] Abstract We present a corpus analysis of how Italian connectives are translated into LIS, the Italian Sign Language. Since corpus resources are scarce, we propose an alignment method between the syntactic trees of the Italian sentence and of its LIS translation. This method, and clustering applied to its outputs, highlight the different ways a connective can be rendered in LIS: with a corresponding sign, by affecting the location or shape of other signs, or being omitted altogether. We translate these findings into a computational model that will be integrated into the pipeline of an existing Italian-LIS rendering system. Initial experiments to learn the four possible translations with Decision Trees give promising results. 1 Introduction Automatic translation between a spoken language and a signed language gives rise to some of the same difficulties as translation between spoken languages, but adds unique challenges of its own. Contrary to what one might expect, sign languages are not artificial languages, but natural languages that spontaneously arose within deaf communities; although they are typically named after the region where they are used, they are not derived from the local spoken language and tend to bear no similarity to it. Therefore, translation from any spoken language into the signed language of that specific region is at least as complicated as between any pairs of unrelated languages. The problem of automatic translation is compounded by the fact that the amount of computational resources to draw on is much smaller than is typical for major spoken languages. Moreover, the fact that sign languages employ a different transmission modality (gestures and expressions instead of sounds) means that existing writing systems are not easily adaptable to them. The resulting lack of a shared written form does nothing to improve the availability of sign language corpora; bilingual corpora, which are of particular importance to a translation system, are especially rare. In fact, various projects around the world are trying to ameliorate this sad state of affairs for specific Sign Languages (Lu and Huenerfauth, 2010; Braffort et al., 2010; Morrissey et al., 2010). In this paper, we describe the work we performed as concerns the translation of connectives from the Italian language into LIS, the Italian Sign Language (Lingua Italiana dei Segni). Because the communities of signers in Italy are relatively small and fragmented, and the language has a relatively short history, there is far less existing research and material to draw on than for, say, ASL (American Sign Language) or BSL (British Sign Language). Our work was undertaken within the purview of the ATLAS project (Bertoldi et al., 2010; Lombardo et al., 2010; Lombardo et al., 2011; Prinetto et al., 2011; Mazzei, 2012; Ahmad et al., 2012), which developed a full pipeline for translating Italian into LIS. 
ATLAS is part of a recent crop of projects devoted to developing automatic translation from language L spoken in geographic area G into the sign language spoken in G (Dreuw et al., 2010; L´opez-Lude˜na et al., 2011; Almohimeed et al., 2011; Lu and Huenerfauth, 2012). Input is taken in the form of written Italian text, parsed, and converted into a semantic representation of its contents; from this semantic representation, LIS output is produced, using a custom serialization format called AEWLIS (which we will describe later). This representation is then augmented with space positioning information, and fed into a final renderer component that performs the signs using a virtual actor. ATLAS focused on a limited domain for which a bilingual Italian/LIS cor270 pus was available: weather forecasts, for which the Italian public broadcasting corporation (RAI) had long been producing special broadcasts with a signed translation. This yielded a corpus of 376 LIS sentences with corresponding Italian text: this corpus, converted into AEWLIS format, was the main data source for the project. Still, it is a very small corpus, hence the main project shied away from statistical NLP techniques, relying instead on rule-based approaches developed with the help of a native Italian/LIS bilingual speaker; a similar approach is taken e.g. in (Almohimeed et al., 2011) for Arabic. 1.1 Why connectives? The main semantic-bearing elements of an Italian sentence, such as nouns or verbs, typically have a LIS sign as their direct translation. We focus on a different class of elements, comprising conjunctions and prepositions, but also some adverbs and prepositional phrases; collectively, we refer to them as connectives. Since they are mainly structural elements, they are more heavily affected by differences in the syntax and grammar of Italian and LIS (and, presumably, in those of any spoken language and the “corresponding” SL). Specifically, as we will see later, some connectives are translated with a sign, some connectives are dropped, whereas others affect the positioning of other signs, or just their syntactic proximity. It should be noted that our usage of the term “connectives” is somewhat unorthodox. For example, while prepositions can be seen as connectives (Ferrari, 2008), only a few adverbs can work as connectives. From the Italian Treebank, we extracted all words or phrases that belonged to a syntactic category that can be a connective (conjunction, preposition, adverb or prepositional phrase). We then found that we could better serve the needs of ATLAS by running our analysis on the entire resulting list, without filtering it by eliminating the entries that are not actual connectives. In fact, semantic differences re-emerge through our analysis: e.g., the temporal adverbs “domani” and “dopodomani” are nearly always preserved, as they do carry key information (especially for weather forecasting) and are not structural elements. In performing our analysis, we pursued a different path from the main project, relying entirely on the bilingual corpus. Although the use of statistical techniques was hampered by the small size of the corpus, at the same time it presented an interesting opportunity to attack the problem from a different angle. In this paper we describe how we uncovered the translation distributions of the different connectives from Italian to LIS via tree alignment. 2 Corpus Analysis The corpus consists of 40 weather forecasts in Italian and LIS. 
The Italian spoken utterance and LIS signing were transcribed from the original videos – one example of an Italian sentence and its LIS equivalent are shown in Figure 1. An English word-by-word translation is provided for the Italian sentence, followed by a more fluent translation; the LIS glosses are literally translated. Note that as concerns LIS, this simply includes the gloss for the corresponding sign. The 40 weather forecast comprise 374 Italian sentences and 376 LIS sentences, stored in 372 AEWLIS files. In most cases, a file corresponds to one Italian sentence and one corresponding LIS sentences; however, there are 4 files where an Italian sentence is split into two LIS sentences, and 2 files where two Italian sentences are merged into one LIS sentence. AEWLIS is an XML-based format (see Figure 2) which represents each sign in the LIS sentence as an element, in the order in which they occur in the sentence. A sign’s lemma is represented by the Italian word with the same meaning, always written in uppercase, and with its part of speech (tipoAG in Figure 2); there are also IDs referencing the lemma’s position in a few dictionaries, but these are not always present. The AEWLIS file also stores several additional attributes, such as: a parent reference that represents the syntax of the LIS sentence; the syntactic role “played” by the sign in the LIS sentence; the facial expression accompanying the gesture; the location in the signing space (which may be an absolute location or a reference to a previous sign’s: compare HR (High Right) and atLemma in Figure 2). These attributes are stored as elements grouped by type, and reference the corresponding sign element by its ordinal position in the sentence. The additional attributes are not always available: morphological variations are annotated only when they differ from an assumed standard form of the sign, while the syntactic structure was annotated for only 89 sentences. 271 (1) (Ita.) Anche Also sulla on Sardegna Sardinia qualche a few annuvolamento cloud covers pomeridiano, afternoon[adj], possibilit`a chance di of qualche a few breve brief scroscio downpour di of pioggia, rain, ma but tendenza trend poi then a towards schiarite. sunny spells. “Also on Sardinia skies will become overcast in the afternoon, chance of a few brief downpours of rain, but then a trend towards a mix of sun and clouds”. (2) (LIS) POMERIGGIO Afternoon SARDEGNA Sardinia AREA area NUVOLA cloud PURE also ACQUAZZONE downpour POTERE can[modal] MA but POI then NUVOLA cloud DIMINUIRE decrease Figure 1: Italian sentence and its LIS translation <Lemmi> <NuovoLemma lemma="POMERIGGIO" tipoAG="NOME" ... endTime="2.247" idSign=""/> <NuovoLemma lemma="sardegna" tipoAG="NOME_PROPRIO" ... endTime="2.795" idSign="2687"/> <NuovoLemma lemma="area" tipoAG="NOME" ... endTime="4.08" idSign="2642"/> <NuovoLemma lemma="nuvola" tipoAG="NOME" ... endTime="5.486" idSign="2667"/> <NuovoLemma lemma="pure" tipoAG="AVVERBIO" ... endTime="6.504" idSign="2681"/> ... </Lemmi><SentenceAttribute> <Parent> <Timestamp time="1" value="ID:3"/> <Timestamp time="2" value="ID:2"/> <Timestamp time="3" value="ID:3"/> <Timestamp time="4" value="root_1"/> <Timestamp time="5" value="ID:3"/> ... </Parent> ... <Sign_Spatial_Location> <Timestamp time="1" value=""/> <Timestamp time="2" value="HR"/> <Timestamp time="3" value="HL"/> <Timestamp time="4" value="atLemma(ID:2, Distant)"/> <Timestamp time="5" value=""/> ... </Sign_Spatial_Location> ... 
<Facial> <Timestamp time="1" value=""/> <Timestamp time="2" value="eye brows:raise"/> <Timestamp time="3" value="eye brows:raise"/> <Timestamp time="4" value="eye brows:-lwrd"/> <Timestamp time="5" value=""/> ... </Facial> </SentenceAttribute> Figure 2: Example excerpt from an AEWLIS file
2.1 Distributional statistics for connectives The list of Italian connectives we considered was extracted from the Italian Treebank developed at the Institute for Computational Linguistics in Pisa, Italy (Montemagni et al., 2003) by searching for conjunctions, prepositions and adverbs. This yielded a total of 777 potential connectives. Of those, only 104 occur in our corpus. A simple count of the occurrences of connectives in the Italian and LIS versions of the corpus yields the following results:
(a) 78 connectives (2068 occurrences total) only occur in the Italian version, for example ALMENO (at least), CON (with), INFATTI (indeed), PER (for).
(b) 8 connectives (67 occurrences total) only occur in the LIS version, for example CIRCA (about), as in "Here I am", PURE (also, additionally).
(c) 25 connectives (925 occurrences total) occur in both versions.
For the third category, we have computed the ratio of the number of occurrences in Italian over the number of occurrences in LIS; the ratios are plotted in logarithmic scale in Figure 3. 0 on the scale corresponds to an ITA/LIS ratio equal to 1; positive numbers indicate that there are more occurrences in ITA, negative numbers that there are more occurrences in LIS. We can recognize three clusters by ratio:
(c1) 9 connectives occurring in both languages, but mainly in Italian, for example POCO (a little), PIÙ (more), SE (if), QUINDI (hence).
(c2) 13 connectives occurring in both languages with similar frequency, for example SOLO (only), POI (then), O (or), MA (but).
(c3) 3 connectives occurring in both languages, but mainly in LIS: MENO (less), ADESSO (now), INVECE (instead).
Figure 3: Ratio of ITA/LIS occurrences in logarithmic scale, plotted for the 25 connectives occurring in both languages: abbastanza (enough), adesso (now), ancora (still, yet), chiaro (clear), domani (tomorrow), dopodomani (day after tomorrow), ecco (here), invece (instead), ma (but), meglio (better), meno (less), o (or), oggi (today), ora (now), ovunque (everywhere), più (more), poco (a little), poi (then), proprio (just, precisely), qui (here), quindi (hence), se (if), sicuro (sure), solo (only), tanto (much).
3 The effect of the Italian connectives on the LIS translation From this basic frequency analysis we can already notice that a large number of connectives only appear in Italian, or have far more occurrences in Italian than in LIS. This is unsurprising, considering that LIS sentences tend to be shorter than Italian sentences in terms of number of signs/words (a fact which probably correlates with the increased energy and time requirements intrinsic to articulating a message using one's arms rather than one's tongue). However, our goal is to predict when a connective should be dropped and when it should be preserved. Furthermore, even if the connective does not appear in the LIS sentence as a directly corresponding sign, that does not mean that its presence in the Italian sentence has no effect on the translation.
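Before turning to those possible effects, the counts behind this analysis can be made concrete. The sketch below shows one way the per-connective occurrence counts and the ITA/LIS log ratios of Figure 3 could be derived from the AEWLIS files and the Italian transcripts; the element and attribute names follow the Figure 2 excerpt, while the whitespace tokenization of the Italian side and the neglect of multi-word connectives are simplifying assumptions.

```python
import math
import xml.etree.ElementTree as ET
from collections import Counter

def lis_lemmas(aewlis_path):
    """Extract the sign glosses (lemmas) from one AEWLIS file."""
    root = ET.parse(aewlis_path).getroot()
    # Element names follow the Figure 2 excerpt; real files may differ.
    return [n.get("lemma").lower() for n in root.iter("NuovoLemma")]

def log_ratios(italian_sentences, aewlis_paths, connectives):
    """Count connective occurrences on both sides; return log10(ITA/LIS)."""
    ita = Counter()
    for sent in italian_sentences:        # naive whitespace tokenization
        for tok in sent.lower().split():
            if tok in connectives:
                ita[tok] += 1
    lis = Counter()
    for path in aewlis_paths:
        for lemma in lis_lemmas(path):
            if lemma in connectives:
                lis[lemma] += 1
    return {c: math.log10(ita[c] / lis[c])
            for c in connectives if ita[c] and lis[c]}
```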
We hypothesize four different possible realizations for a connective in the Italian sentence:
• the connective word or phrase may map to a corresponding connective sign;
• the connective is not present as a sign, but may affect the morphology of the signs which translate words syntactically adjacent to the connective;
• the connective is not present as a sign, but its presence may be reflected by the fact that words connected by it map to signs which are close to each other in the LIS syntax tree;
• the connective is dropped altogether.
The second hypothesis deserves some explanation. The earliest treatments of LIS assumed that each sign (lemma) could be treated as invariant. Attempts to represent LIS in writing simply replaced each sign with a chosen Italian word (or phrase, if necessary) with the same meaning. Although this is still a useful way of representing the basic lemma, more recent studies have noted that LIS signs can undergo significant morphological variations which are lost under such a scheme. The AEWLIS format, in fact, was designed to preserve them. Of course, morphological variations in LIS are not phonetic, like in a spoken language, but gestural (Volterra, 1987; Romeo, 1991). For example, the location in which a gesture is performed may be varied, or its speed, or the facial expressions that accompany it (Geraci et al., 2008). One particularly interesting axis of morphology is the positioning of the gesture in the signing space in front of the signer. This space is implicitly divided into a grid with a few different positions from left to right and from top to bottom (see HR – High Right, and HL – High Left, in Figure 2). Two or more signs can then be placed in different positions in this virtual space, and by performing other signs in the same positions the signer can express a backreference to the previously established entity at that location. One can even have a movement verb where the starting and ending positions of the gesture are positioned independently to indicate the source and destination of the movement. In other words, these morphological variations can perform a similar function to gender and number agreement in Italian backreferences, but they can also assume roles that in Italian would be performed by prepositions, which are connectives. In fact, as we will see later on, Italian prepositions are never translated as signs, but are often associated with morphological variations on related signs.
3.1 Tree Alignment Two of our four translation hypotheses involve a notion of distance on the syntax tree, and a notion of signs corresponding to words. Therefore, it is not sufficient to consider the LIS sentence and the Italian sentence separately. Instead, their syntax trees must be reconstructed and aligned. Tree alignment in a variety of forms has been extensively used in machine translation systems (Gildea, 2003; Eisner, 2003; May and Knight, 2007).
Figure 4: Example of integrated syntax trees, built for the Italian sentence and LIS gloss sequence of Figure 1 (source file 19_f08_2011-06-08_10_04_10.xml).
As far as we know, we are the first to attempt the usage of tree alignment to aid in the translation between a spoken and a sign language, partly because corpora that include synctactic trees for sign language sentences hardly exist. (L´opez-Lude˜na et al., 2011) does use alignment techniques for translation from Spanish to Spanish Sign Language (SSL), but it is limited to alignment between words or phrases in Spanish, and glosses or sequences of glosses in SSL. We have developed a pipeline that takes in input the corpus files, parses the Italian sentence with an existing parser, and retrieves / builds a parse tree for the LIS sentence. The two trees are then aligned by exploiting the word/sign alignment. A sample output is shown in Figure 4. Italian sentence parsing. Since the corpus contains the Italian sentences in plain, unstructured text form, they need to be parsed. We used the DeSR parser, a dependency parser pre-trained on a very large Italian corpus (Attardi et al., 2007; Ciaramita and Attardi, 2011). This parser produced the syntax trees and POS tagging that we used for the Italian part of the corpus. LIS syntax tree. One of the attributes allowed by AEWLIS is “parent”, which points a sign to its parent in the syntax tree, or marks it as a root (see Figure 2). These hand-built syntax trees are available in roughly 1/4 of the AEWLIS files. Because the size of our corpus is already limited, and be274 cause no tools are available to generate LIS syntax trees, for the remaining 3/4 of the corpus we fell back on a simple linear tree where each sign is connected to its predecessor. This solution at least maintains locality in most cases. Word Alignment. Having obtained syntax trees for the two sentences, we then needed to align them. For this purpose we used the Berkeley Word Aligner (BWA) 1 (Denero, 2007), a general tool for aligning sentences in bilingual corpora. BWA takes as input a series of matching sentences in two different languages, trains multiple unsupervised alignment models, and selects the optimal result using a testing set of manual alignments. The output is a list of aligned word indices for each input sentence pair. On our data set, BWA performance is as follows: Precision = 0.736364, Recall = 0.704348, AER = 0.280000. Integration. The result is an integrated syntax tree representation of the Italian and LIS versions of the sentence, with arcs bridging aligned word/sign pairs. Since some connectives consist of multi-word phrases, the word nodes which are part of one are merged into a super-node that inherits all connections to other nodes. Figure 4 shows the end result for the Italian and LIS sentences in Figure 1 (the two sentences are repeated for convenience at the bottom of Figure 4). The rectangular boxes are words in the Italian sentence, while the rounded boxes are signs in the LIS sentence. The Italian tree has its root(s) at the bottom, while the LIS tree has its root(s) at the top. Solid arrows point from children to parent nodes in the syntax tree. Gray-shaded boxes represent connectives (words or signs, as indicated by the border of the box). Bold dashed lines show word alignment. Edges with round heads show relationships where a sign has a location attribute referencing another sign. Arrows with an empty triangular head trace the paths described in the next section. 3.2 Subtree alignment and path processing At this point individual words are aligned, but that is not sufficient. 
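Before turning to subtree alignment, the integration step just described can be made concrete with a small sketch. It is ours, not the project's code: the Italian side is assumed to be already parsed (DeSR in the paper) and converted to the same node-list representation, the word aligner's output is abstracted to index pairs, and the merging of multi-word connectives into super-nodes is omitted.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    index: int
    form: str
    parent: Optional[int] = None                        # governing node, None for a root
    aligned: List[int] = field(default_factory=list)    # aligned nodes on the other side

def lis_tree(signs: List[str], parents: Optional[List[Optional[int]]] = None) -> List[Node]:
    """Build the LIS tree from AEWLIS 'parent' attributes when they exist;
    otherwise fall back to a linear tree where each sign is governed by its
    predecessor, as described above."""
    nodes = [Node(i, s) for i, s in enumerate(signs)]
    for i, node in enumerate(nodes):
        node.parent = parents[i] if parents is not None else (i - 1 if i > 0 else None)
    return nodes

def attach_alignment(italian: List[Node], lis: List[Node],
                     pairs: List[Tuple[int, int]]) -> None:
    """Record the word/sign alignment links on both trees, turning the two
    parses into one integrated structure with bridging arcs."""
    for ita_idx, lis_idx in pairs:
        italian[ita_idx].aligned.append(lis_idx)
        lis[lis_idx].aligned.append(ita_idx)
```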
Our hypotheses on the effect of connectives on translation requires us to align a tree fragment surrounding the Italian connective with the corresponding tree fragment on the LIS 1http://code.google.com/p/ berkeleyaligner/ side - where the connective may be missing. In effect, since we have hypothesized that the presence of a connective can affect the translation of the two subtrees that it connects, we would like to be able to align each of those subtrees to its translation. However, given the differences between the two languages, it is not easy to give a clear definition of this mapping - let alone to compute it. Instead, we can take a step back to word-level alignment. We make the observation that, if two words belong to two different subtrees linked by a connective, so that the path between the two words goes through the connective, then the frontier between the LIS counterparts of those two subtrees should also lie along the path between the signs aligned with those two words. If the connective is preserved in translation as a sign, we should expect to find it along that path; if it is not, its effect should still be seen along that path, either in the form of morphological variations to the signs along the path, or in the shortness of the path itself. The first step, then, is to split the Italian syntax tree by removing the connective. This yields one subtree containing the connective’s parent node, if any, and one subtree for each of the connective’s children, if any. The parent subtree typically contains most of the rest of the sentence, so only the direct ancestors of the connective are considered. Then, each pair of words belonging to different subtrees is linked by a path that goes through the connective in the original tree. Of these words, we select the ones that have aligned signs, and then we compute the path between each pair of signs aligned to words belonging to different subtrees. This gives us a set of paths to consider in the LIS syntax tree. For example, let us consider the connective “di” between “possibilit`a” and “scroscio” in Figure 4. • This node connects two subtrees: a child subtree containing “qualche, breve, scroscio, di, pioggia”, and a parent subtree containing the rest of the sentence. • From each subtree, a set of paths is generated: all paths extending from the connective to the leaves of the child subtree (for example “scroscio, qualche” or “scroscio, di, pioggia”), and the path of direct ancestors in the parent tree (“sulla, annuvolamento, possibilit`a”). 
• Iterate through the cartesian product of each 275 Table 1: Translation candidates for connectives with more than 10 occurrences Connective ITA Occurrences Sign Location Close Missing domani 71 67 (94.37%) 1 (1.41%) 2 (2.82%) 4 (5.63%) dopodomani 15 14 (93.33%) 0 (0.00%) 0 (0.00%) 1 (6.67%) mentre 28 26 (92.86%) 5 (17.86%) 0 (0.00%) 1 (3.57%) o 37 37 (100.00%) 2 (5.41%) 6 (16.22%) 0 (0.00%) per`o 10 9 (90.00%) 1 (10.00%) 1 (10.00%) 1 (10.00%) ancora 72 44 (61.11%) 1 (1.39%) 3 (4.17%) 25 (34.72%) invece 17 9 (52.94%) 1 (5.88%) 2 (11.76%) 6 (35.29%) ma 51 29 (56.86%) 1 (1.96%) 2 (3.92%) 21 (41.18%) poi 22 10 (45.45%) 2 (9.09%) 0 (0.00%) 10 (45.45%) abbastanza 11 4 (36.36%) 1 (9.09%) 0 (0.00%) 6 (54.55%) anche 89 33 (37.08%) 5 (5.62%) 1 (1.12%) 53 (59.55%) ora 17 6 (35.29%) 1 (5.88%) 1 (5.88%) 10 (58.82%) proprio 11 5 (45.45%) 0 (0.00%) 0 (0.00%) 6 (54.55%) quindi 35 9 (25.71%) 1 (2.86%) 0 (0.00%) 25 (71.43%) come 16 0 (0.00%) 1 (6.25%) 1 (6.25%) 14 (87.50%) dove 28 0 (0.00%) 1 (3.57%) 0 (0.00%) 27 (96.43%) generalmente 13 0 (0.00%) 0 (0.00%) 0 (0.00%) 13 (100.00%) per quanto riguarda 14 0 (0.00%) 0 (0.00%) 1 (7.14%) 13 (92.86%) piuttosto 13 0 (0.00%) 0 (0.00%) 0 (0.00%) 13 (100.00%) pi`u 57 0 (0.00%) 3 (5.26%) 2 (3.51%) 52 (91.23%) poco 63 2 (3.17%) 3 (4.76%) 0 (0.00%) 58 (92.06%) sempre 13 1 (7.69%) 0 (0.00%) 0 (0.00%) 12 (92.31%) soprattutto 16 1 (6.25%) 1 (6.25%) 0 (0.00%) 14 (87.50%) a 111 0 (0.00%) 18 (16.22%) 30 (27.03%) 66 (59.46%) con 91 0 (0.00%) 20 (21.98%) 11 (12.09%) 62 (68.13%) da 97 0 (0.00%) 26 (26.80%) 18 (18.56%) 62 (63.92%) di 510 2 (0.39%) 92 (18.04%) 140 (27.45%) 312 (61.18%) e 206 17 (8.25%) 34 (16.50%) 25 (12.14%) 140 (67.96%) in 168 6 (3.57%) 37 (22.02%) 16 (9.52%) 113 (67.26%) per 120 0 (0.00%) 7 (5.83%) 35 (29.17%) 82 (68.33%) su 327 4 (1.22%) 121 (37.00%) 38 (11.62%) 190 (58.10%) verso 18 0 (0.00%) 6 (33.33%) 1 (5.56%) 12 (66.67%) pair of sets (in this case we have only one pair), and consider the full path formed by the two paths connected by the connective node (for instance, “sulla, annuvolamento, possiblit`a, di, scroscio, breve”). • For each of these paths, take the signs aligned to words on different sides of the target connective, and find the shortest path between those signs in the LIS syntax tree; we call this the aligned path. For example, from “possibilit`a” and “scroscio” we find “POTERE, ACQUAZZONE”. If this process generates multiple paths, only the maximal ones are kept. By looking at words within a certain distance of the connective, at their aligned signs, and at the distance between those signs in the aligned path, the program then produces one or more “translation candidates” for each occurrence of a connective: • Sign: if the connective word is aligned to a connective sign in LIS, that is its direct translation; • Location: if morphology variations (currently limited to the “location” attribute, see Figure 2) are present on a sign aligned to an It. word belonging to one of the examined paths, and the word is less than 2 steps away from the connective, that morphological variation in LIS may capture the function of the connective; • Close: if two It. words are connected by a connective, and they map to signs which have a very short path between them (up to 3 nodes, including the two signs), the connective may be reflected simply in this close connection between the translated subtrees in the LIS syntax tree; • Missing: if none of the above hypotheses are possible, we hypothesize that the connective has been simply dropped. 
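Stripped down to its decision logic, the candidate generation above can be sketched as follows. The evidence fields (direct sign alignment, lengths of the aligned paths, distances of location-marked signs from the connective) are assumed to have been precomputed by the path machinery described earlier; the field names and the dictionary representation are ours, while the two thresholds are the ones stated in the text.

```python
def translation_candidates(occ, max_location_dist=2, max_close_len=3):
    """Return the set of translation hypotheses available for one connective
    occurrence, following the four cases listed above."""
    candidates = set()
    if occ.get("aligned_sign"):                       # direct sign translation
        candidates.add("Sign")
    if any(d < max_location_dist                      # location morphology nearby
           for d in occ.get("location_marked_dists", [])):
        candidates.add("Location")
    if any(n <= max_close_len                         # short aligned path in the LIS tree
           for n in occ.get("aligned_path_lengths", [])):
        candidates.add("Close")
    if not candidates:                                # none of the above
        candidates.add("Missing")
    return candidates

# Toy occurrence of "di": no directly aligned sign, but the aligned path
# "POTERE, ACQUAZZONE" contains only 2 nodes, so the Close hypothesis fires.
example = {"aligned_sign": None, "aligned_path_lengths": [2], "location_marked_dists": []}
print(translation_candidates(example))   # {'Close'}
```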
Table 1 shows the results of this analysis. It includes only connectives with more than 10 occurrences. For each connective and translation hypothesis, the shading of the cell is proportional to 276 the fraction of occurrences where that hypothesis is possible; this fraction is also given as a percent. Note that Sign, Location and Close candidates are not mutually exclusive: for instance, an occurrence of a connective might be directly aligned with a sign, but at the same time it might fit the criteria for a Location candidate. For this reason, the sum of the percents in the four columns is not necessarily 100. k-means clustering (MacQueen, 1967; Lloyd, 1982) has been applied to the connectives, with the aforementioned fractions as the features. The resulting five clusters are represented by the row groupings in the table. The first cluster contains words which clearly have a corresponding sign in LIS, such as “domani” (tomorrow). “Domani” and “dopodomani” are not actually connectives, while “mentre”, “o” and “per`o” are. It is interesting to note that, while a logician might expect “e” (and) and “o” (or) to be treated similarly, they actually work quite differently in LIS: there is a specific sign for “o”, but there is no sign for “e”. Instead, signs are simply juxtaposed in LIS where “e” would be used in Italian. The words in the second cluster also have a direct sign translation, but they are missing in the LIS translation around half of the time. Several words represent connections with previous statements or situations, such as “ancora” (again), “invece” (instead), “ma” (but). These appear to be often dropped in LIS when they reference a previous sentence, e.g. a sentence-initial “ma”; or when they are redundant in Italian, e.g. “ma” in “ma anche” (“but also”). Therefore, we think can see two phenomena at play here: a stronger principle of economy in LIS, and a reduced number of explicit connections across sentences. The third cluster is similar to the second cluster, but with a higher percent of dropped connectives. This is probably related to the semantics of these five words. “Abbastanza” means “quite, enough”, and in general indicates a medium quantity, not particularly large nor particularly small. It is no surprise that this word is more likely to succumb to principles of economy in language. “Anche” means “also”, and is either translated as “PURE” (also) or dropped. This does not seem to depend on the specific circumstances of its usage; rather, it seems to be largely a stylistic choice by the translator. “Proprio” (“precisely”, “just”) has a corresponding sign “PROPRIO”, but since it does not convey essential information it is a good candidate for dropping. “Quindi”, meaning “therefore”, has its own sign “QUINDI”, but once again the causal relationship it conveys is usually not essential to understanding what the weather will be, and thus it is frequently dropped. The fourth cluster consists of connectives which are largely simply dropped. Some of these are elements that just contribute to the discourse flow in Italian, such as “per quanto riguarda” (“concerning”); in fact, this connective mainly occurs in sentence-initial position in the Italian sentences in our corpus and denotes a change of topic from the previous sentence, corroborating our hypothesis of a reduced use of explicit intersentence connections in LIS. 
It may seem strange for comparative and intensity markers such as “pi`u” (more) or “poco” (a little) to be so consistently dropped, but it turns out that intensity variations for weather phenomena are often embedded into a specific sign, for example “NUVOLOSIT `A AUMENTARE” (increasing cloud cover). The fifth cluster contains all Italian prepositions (with 10 or more occurrences in the corpus), none of which is translated as a sign (the 6 occurrences for “in”, the 4 for “su” and the 2 for “di” are due to alignment errors). We can conclude that prepositions do not exist in LIS as parts of speech; however, the prepositions in this cluster are often associated with morphological variations in the spatial positioning of related signs, which suggests that the role associated with these prepositions in Italian is performed by these variations in LIS. The conjunction “e” (and) also ends up in this cluster, although it has 8 legitimate sign alignments with “pure” (“too”); the rest are alignment errors. Unsurprisingly, all connectives in this class also have high ratings for the “close” hypothesis. 4 Rule extraction We trained a classifier to help a LIS generator determine how an Italian connective should be translated. Because the translation pipeline we plan to integrate with is rule-based, we chose a Decision Tree as our classifier: this allows rules to be easily extracted from the classification model. In order to identify a single class for each example, we ranked the four possible translation candidates as follows: Sign is the strongest, then Location, then Close, and finally Missing is the 277 child1 align = None ∩word = Per quanto riguarda ∩parent align = None ⇒Missing child1 align = None ∩word = Per quanto riguarda ∩parent align = PREVEDERE ⇒Close child1 align = None ∩word = o ⇒Align(O) child1 align = None ∩word = su ∩child2 align mykind = location ∩child2 align = SICILIA ⇒Location Figure 5: Some rules extracted from the decision tree weakest. Then, each example is labeled with the strongest translation candidate available for it: thus, for example, if the connective word appears to be translated with a connective sign, and the words it connects are also aligned to signs which are close to each other syntactically, then the class is Sign, not Close. Our training data suffers from large imbalance between the “missing” class and the others. A classifier that simply labels all examples as “missing” would have an accuracy above 60%, and in fact, that is the classifier that we obtain if we attempt to automatically optimize the parameters of a Decision Tree (DT). We also note that, for connectives where both options are possible, choosing to translate them can make the sentence more verbose, but choosing to drop them risks losing part of the sentence’s meaning: the worse risk is the latter. Following accepted practice with unbalanced datasets (Chawla et al., 2004), we rebalanced the classes by duplicating all examples of the Align, Location and Close classes, but not those of the Missing class. On our data set of connectives with at least 10 occurrences, we trained a DT using AdaBoost (Freund and Schapire, 1997). The features include the word neighboring the connective in the Italian syntax tree, their aligned signs if any, part of speech tags, and semantic categories such as time or location. The resulting tree is very large, but we provide a few examples of the rules that can be extracted from it in Figure 5. Bootstrap evaluation shows our DT to have an accuracy of 83.58% ± 1.03%. 
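The labelling, rebalancing, and boosted decision-tree training just described can be approximated in a few lines. The sketch below uses scikit-learn as a stand-in for the authors' toolchain and leaves the feature encoding abstract; it is an illustration of the procedure, not the actual experimental setup.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

RANK = ["Sign", "Location", "Close", "Missing"]       # strongest to weakest

def label(candidates):
    """Label an example with the strongest translation candidate available."""
    return next(c for c in RANK if c in candidates)

def rebalance(X, y):
    """Duplicate every non-Missing example once, as described above, to
    counter the dominance of the Missing class."""
    X_out, y_out = list(X), list(y)
    for features, cls in zip(X, y):
        if cls != "Missing":
            X_out.append(features)
            y_out.append(cls)
    return X_out, y_out

def train(X, y):
    """X is assumed to be a numeric encoding of the features mentioned above
    (neighbouring words, aligned signs, POS tags, semantic categories)."""
    X_bal, y_bal = rebalance(X, y)
    # scikit-learn >= 1.2 uses `estimator`; older releases call it `base_estimator`.
    model = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3))
    return model.fit(X_bal, y_bal)
```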
In contrast, a baseline approach of taking the most common class for each connective results in an accuracy of 68.70% ± 0.88%. Furthermore, the baseline classifier has abysmal recall for the Close and Location classes (0.00% and 0.85%, respectively), which our DT greatly improves upon (86.73% and 75.32%). In order to estimate the impact of the lack of a LIS syntax tree in most of the corpus, we also learned and evaluated a DT using only the 1/4 of the corpus for which LIS syntax trees are available. The accuracy is 81.44% ± 2.03%, versus a baseline of 71.55% ± 1.74%. The recall for Close and Location is 89.22% and 73.58%, vs. 0.00% and 3.51% for the baseline. These results are comparable with the those obtained on the whole corpus, confirming that linear trees are a reasonable fallback. Both clustering and classification were performed using RapidMiner 5.3. 2 5 Conclusions and Future Work The small size of our corpus, with around 375 bilingual sentences, posed a large challenge to the use of statistical methods; on the other hand, having no access to a LIS speaker prevented us from simply relying on a rule-based approach. By combining syntax tree processing with several machine learning techniques, we were able to analyze the corpus and detect patterns that show linguistic substance. We have produced initial results in terms of rule extraction, and we will be integrating these rules into the full Italian-LIS translation system to produce improved translation of connectives. 6 Acknowledgements This work was supported by the ATLAS project, funded by Regione Piemonte within the “CIPE 2007” framework. Partial support to the authors was also provided by awards IIS 0905593 (from the NSF) and NPRP 5-939-1-155 (from the QNRF). A special thanks to A. Mazzei (ATLAS) for his willingness to answer our email bursts. Thanks to other members of ATLAS, in particular P. Prinetto, N. Bertoldi, C. Geraci, L. Lesmo; and to C. Soria, who extracted the list of potential connectives from the Italian Treebank. References Nadeem Ahmad, Davide Barberis, Nicola Garazzino, Paolo Prinetto, Umar Shoaib, and Gabriele Tiotto. 2012. A virtual character based italian sign language dictionary. In Proceedings of the Conference Universal Learning Design. Masaryk University. 2http://rapid-i.com/ 278 Abdulaziz Almohimeed, Mike Wald, and R.I. Damper. 2011. Arabic Text to Arabic Sign Language Translation System for the Deaf and Hearing-Impaired Community. In Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies, pages 101–109, Edinburgh, Scotland, UK, July. Association for Computational Linguistics. Giuseppe Attardi, Felice Dell’Orletta, Maria Simi, Atanas Chanev, and Massimiliano Ciaramita. 2007. Multilingual dependency parsing and domain adaptation using DeSR. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL, pages 1112–1118. N Bertoldi, G Tiotto, P Prinetto, E Piccolo, F Nunnari, V Lombardo, A Mazzei, R Damiano, L Lesmo, and A Del Principe. 2010. On the creation and the annotation of a large-scale Italian-LIS parallel corpus. In 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, CSLT. Annelies Braffort, Laurence Bolot, E Chtelat-Pel, Annick Choisier, Maxime Delorme, Michael Filhol, J´er´emie Segouat, Cyril Verrecchia, Flora Badin, and Nad‘ege Devos. 2010. Sign language corpora for analysis, processing and evaluation. In Proc. of the Seventh International Conference on Language Resources and Evaluation (LREC’10). 
Nitesh V Chawla, Nathalie Japkowicz, and Aleksander Kotcz. 2004. Editorial: special issue on learning from imbalanced data sets. ACM SIGKDD Explorations Newsletter, 6(1):1–6. Massimiliano Ciaramita and Giuseppe Attardi. 2011. Dependency parsing with second-order feature maps and annotated semantic information. In H. Bunt, P. Merlo, and J. Nivre, editors, Trends in Parsing Technology, volume 43 of Text, Speech and Language Technology, pages 87–104. Springer. John Denero. 2007. Tailoring word alignments to syntactic machine translation. In In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL-2007, pages 17–24. Philippe Dreuw, Jens Forster, Yannick Gweth, Daniel Stein, Hermann Ney, Gregorio Martinez, Jaume Verges Llahi, Onno Crasborn, Ellen Ormel, Wei Du, et al. 2010. SignSpeak–understanding, recognition, and translation of sign languages. In The 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies (CSLT 2010), pages 22–23, Valletta, Malta. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 2, pages 205–208. Association for Computational Linguistics. Angela Ferrari. 2008. Congiunzioni frasali, congiunzioni testuali e preposizioni: stessa logica, diversa testualit`a. In Emanuela Cresti, editor, Prospettive nello studio del lessico italiano, Atti del IX Congresso della Societ`a Internazionale di Linguistica e Filologia Italiana, pages 411–416, Florence, Italy. Firenze University Press. Yoav Freund and Robert E Schapire. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1):119–139. Carlo Geraci, Marta Gozzi, Costanza Papagno, and Carlo Cecchetto. 2008. How grammar can cope with limited short-term memory: Simultaneity and seriality in sign languages. Cognition, 106(2):780– 804. Daniel Gildea. 2003. Loosely tree-based alignment for machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 80–87. Association for Computational Linguistics. Stuart Lloyd. 1982. Least squares quantization in pcm. Information Theory, IEEE Transactions on, 28(2):129–137. Vincenzo Lombardo, Fabrizio Nunnari, and Rossana Damiano. 2010. A virtual interpreter for the Italian Sign Language. In Proceedings of the 10th International Conference on Intelligent Virtual Agents, IVA’10, pages 201–207, Berlin, Heidelberg. Springer-Verlag. Vincenzo Lombardo, Cristina Battaglino, Rossana Damiano, and Fabrizio Nunnari. 2011. An avatarbased interface for the Italian Sign Language. In 2011 International Conference on Complex, Intelligent and Software Intensive Systems (CISIS), pages 589–594. IEEE. Ver´onica L´opez-Lude˜na, Rub´en San-Segundo, Syaheerah Lufti, Juan Manuel Lucas-Cuesta, Juli´an David Echevarry, and Beatriz Mart´ınezGonz´alez. 2011. Source Language Categorization for improving a Speech into Sign Language Translation System. In Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies, pages 84–93, Edinburgh, Scotland, UK, July. Ver´onica L´opez-Lude˜na, Rub´en San-Segundo, Juan Manuel Montero, Ricardo C´ordoba, Javier Ferreiros, and Jos´e Manuel Pardo. 2011. Automatic categorization for improving Spanish into Spanish Sign Language machine translation. Computer Speech & Language. Pengfei Lu and Matt Huenerfauth. 2010. 
Collecting a Motion-Capture Corpus of American Sign Language for Data-Driven Generation Research. In Proceedings of the NAACL HLT 2010 Workshop on Speech and Language Processing for Assistive 279 Technologies, pages 89–97, Los Angeles, California, June. Association for Computational Linguistics. Pengfei Lu and Matt Huenerfauth. 2012. Learning a Vector-Based Model of American Sign Language Inflecting Verbs from Motion-Capture Data. In Proceedings of the Third Workshop on Speech and Language Processing for Assistive Technologies, pages 66–74, Montr´eal, Canada, June. Association for Computational Linguistics. James MacQueen. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, page 14. California, USA. Jonathan May and Kevin Knight. 2007. Syntactic realignment models for machine translation. In Proceedings of EMNLP, pages 360–368. Alessandro Mazzei. 2012. Sign language generation with expert systems and ccg. In Proceedings of the Seventh International Natural Language Generation Conference, INLG ’12, pages 105–109, Stroudsburg, PA, USA. Association for Computational Linguistics. Simonetta Montemagni, Francesco Barsotti, Marco Battista, Nicoletta Calzolari, Ornella Corazzari, Alessandro Lenci, Antonio Zampolli, Francesca Fanciulli, Maria Massetani, Remo Raffaelli, et al. 2003. Building the Italian syntactic-semantic Treebank. Treebanks, pages 189–210. S. Morrissey, H. Somers, R. Smith, S. Gilchrist, and S. Dandapat. 2010. Building Sign Language Corpora for Use in Machine Translation. In 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies (CSLT 2010), pages 172–178, Valletta, Malta. P. Prinetto, U. Shoaib, and G. Tiotto. 2011. The Italian Sign Language Sign Bank: Using WordNet for Sign Language corpus creation. In 2011 International Conference on Communications and Information Technology (ICCIT), pages 134–137. Orazio Romeo. 1991. Dizionario dei segni: la lingua dei segni in 1400 immagini. Zanichelli. Virginia Volterra. 1987. La lingua italiana dei segni: la comunicazione visivo-gestuale dei sordi. Il Mulino. 280
2013
27
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 281–290, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Stop-probability estimates computed on a large corpus improve Unsupervised Dependency Parsing David Mareˇcek and Milan Straka Charles University in Prague, Faculty of Mathematics and Physics Institute of Formal and Applied Linguistics Malostransk´e n´amˇest´ı 25, 11800 Prague, Czech Republic {marecek,straka}@ufal.mff.cuni.cz Abstract Even though the quality of unsupervised dependency parsers grows, they often fail in recognition of very basic dependencies. In this paper, we exploit a prior knowledge of STOP-probabilities (whether a given word has any children in a given direction), which is obtained from a large raw corpus using the reducibility principle. By incorporating this knowledge into Dependency Model with Valence, we managed to considerably outperform the state-of-theart results in terms of average attachment score over 20 treebanks from CoNLL 2006 and 2007 shared tasks. 1 Introduction The task of unsupervised dependency parsing (which strongly relates to the grammar induction task) has become popular in the last decade, and its quality has been greatly increasing during this period. The first implementation of Dependency Model with Valence (DMV) (Klein and Manning, 2004) with a simple inside-outside inference algorithm (Baker, 1979) achieved 36% attachment score on English and was the first system outperforming the adjacent-word baseline.1 Current attachment scores of state-of-the-art unsupervised parsers are higher than 50% for many languages (Spitkovsky et al., 2012; Blunsom and Cohn, 2010). This is still far below the supervised approaches, but their indisputable advantage is the fact that no annotated treebanks are needed and the induced structures are not burdened by any linguistic conventions. Moreover, 1The adjacent-word baseline is a dependency tree in which each word is attached to the previous (or the following) word. The attachment score of 35.9% on all the WSJ test sentences was taken from (Blunsom and Cohn, 2010). supervised parsers always only simulate the treebanks they were trained on, whereas unsupervised parsers have an ability to be fitted to different particular applications. Some of the current approaches are based on the DMV, a generative model where the grammar is expressed by two probability distributions: Pchoose(cd|ch, dir), which generates a new child cd attached to the head ch in the direction dir (left or right), and Pstop(STOP|ch, dir, · · · ), which makes a decision whether to generate another child of ch in the direction dir or not.2 Such a grammar is then inferred using sampling or variational methods. Unfortunately, there are still cases where the inferred grammar is very different from the grammar we would expect, e.g. verbs become leaves instead of governing the sentences. Rasooli and Faili (2012) and Bisk and Hockenmaier (2012) made some efforts to boost the verbocentricity of the inferred structures; however, both of the approaches require manual identification of the POS tags marking the verbs, which renders them useless when unsupervised POS tags are employed. The main contribution of this paper is a considerable improvement of unsupervised parsing quality by estimating the Pstop probabilities externally using a very large corpus, and employing this prior knowledge in the standard inference of DMV. 
The estimation is done using the reducibility principle introduced in (Mareˇcek and ˇZabokrtsk´y, 2012). The reducibility principle postulates that if a word (or a sequence of words) can be removed from a sentence without violating its grammatical correctness, it is a leaf (or a subtree) in its dependency structure. For the purposes of this paper, we assume the following hypothesis: If a sequence of words can be removed from 2The Pstop probability may be conditioned by additional parameters, such as adjacency adj or fringe word cf, which will be described in Section 4. 281 Figure 1: Example of a dependency tree. Sequences of words that can be reduced are underlined. a sentence without violating its grammatical correctness, no word outside the sequence depends on any word in the sequence. Our hypothesis is a generalization of the original hypothesis since it allows a reducible sequence to form several adjacent subtrees. Let’s outline the connection between the Pstop probabilities and the property of reducibility. Figure 1 shows an example of a dependency tree. Sequences of reducible words are marked by thick lines below the sentence. Consider for example the word “further”. It can be removed and thus, according to our hypothesis, no other word depends on it. Therefore, we can deduce that the Pstop probability for such word is high both for the left and for the right direction. The phrase “for further discussions” is reducible as well and we can deduce that the Pstop of its first word (“for”) in the left direction is high since it cannot have any left children. We do not know anything about its right children, because they can be located within the sequence (and there is really one in Figure 1). Similarly, the word “discussions”, which is the last word in this sequence, cannot have any right children and we can estimate that its right Pstop probability is high. On the other hand, non-reducible words such, as the verb “asked” in our example, can have children, and therefore their Pstop can be estimated as low for both directions. The most difficult task in this approach is to automatically recognize reducible sequences. This problem, together with the estimation of the stopprobabilities, is described in Section 3. Our model, not much different from the classic DMV, is introduced in Section 4. Section 5 describes the inference algorithm based on Gibbs sampling. Experiments and results are discussed in Section 6. Section 7 concludes the paper. 2 Related Work Reducibility: The notion of reducibility belongs to the traditional linguistic criteria for recognizing dependency relations. As mentioned e.g. by K¨ubler et al. (2009), the head h of a construction c determines the syntactic category of c and can often replace c. In other words, the descendants of h can be often removed without making the sentence incorrect. Similarly, in the Dependency Analysis by Reduction (Lopatkov´a et al., 2005), the authors assume that stepwise deletions of dependent elements within a sentence preserve its syntactic correctness. A similar idea of dependency analysis by splitting a sentence into all possible acceptable fragments is used by Gerdes and Kahane (2011). We have directly utilized the aforementioned criteria for dependency relations in unsupervised dependency parsing in our previous paper (Mareˇcek and ˇZabokrtsk´y, 2012). Our dependency model contained a submodel which directly prioritized subtrees that form reducible sequences of POS tags. 
Reducibility scores of given POS tag sequences were estimated using a large corpus of Wikipedia articles. The weakness of this approach was the fact that longer sequences of POS tags are very sparse and no reducibility scores could be estimated for them. In this paper, we avoid this shortcoming by estimating the STOP probabilities for individual POS tags only. Another task related to reducibility is sentence compression (Knight and Marcu, 2002; Cohn and Lapata, 2008), which was used for text summarization. The task is to shorten the sentences while retaining the most important pieces of information, using the knowledge of the grammar. Conversely, our task is to induce the grammar using the sentences and their shortened versions. Dependency Model with Valence (DMV) has been the most popular approach to unsupervised dependency parsing in the recent years. It was introduced by Klein and Manning (2004) and further improved by Smith (2007) and Cohen et al. (2008). Headden III et al. (2009) introduce the Extended Valence Grammar and add lexicalization and smoothing. Blunsom and Cohn (2010) use tree substitution grammars, which allow learning of larger dependency fragments by employing the Pitman-Yor process. Spitkovsky et al. (2010) improve the inference using iterated learning of increasingly longer sentences. Further improvements were achieved by better dealing with punctuation (Spitkovsky et al., 2011b) and new “boundary” models (Spitkovsky et al., 2012). 282 Other approaches to unsupervised dependency parsing were described e.g. in (Søgaard, 2011), (Cohen et al., 2011), and (Bisk and Hockenmaier, 2012). There also exist “less unsupervised” approaches that utilize an external knowledge of the POS tagset. For example, Rasooli and Faili (2012) identify the last verb in the sentence, minimize its probability of reduction and thus push it to the root position. Naseem et al. (2010) make use of manually-specified universal dependency rules such as Verb→Noun, Noun→Adjective. McDonald et al. (2011) identify the POS tags by a crosslingual transfer. Such approaches achieve better results; however, they are useless for grammar induction from plain text. 3 STOP-probability estimation 3.1 Recognition of reducible sequences We introduced a simple procedure for recognition of reducible sequences in (Mareˇcek and ˇZabokrtsk´y, 2012): The particular sequence of words is removed from the sentence and if the remainder of the sentence exists elsewhere in the corpus, the sequence is considered reducible. We provide an example in Figure 2. The bigram “this weekend” in the sentence “The next competition is this weekend at Lillehammer in Norway.” is reducible since the same sentence without this bigram, i.e., “The next competition is at Lillehammer in Norway.”, is in the corpus as well. Similarly, the prepositional phrase “of Switzerland” is also reducible. It is apparent that only very few reducible sequences can be found by this procedure. If we use a corpus containing about 10,000 sentences, it is possible that we found no reducible sequences at all. However, we managed to find a sufficient amount of reducible sequences in corpora containing millions of sentences, see Section 6.1 and Table 1. 3.2 Computing the STOP-probability estimations Recall our hypothesis from Section 1: If a sequence of words is reducible, no word outside the sequence can depend on any word in the sequence. Or, in terms of dependency structure: A reducible sequence consists of one or more adjacent subtrees. 
This means that the first word of a reducible sequence does not have any left children and, similarly, the last word in a reducible sequence does not have any right children. We make use of this property directly for estimating Pstop probabilities.

Figure 2: Example of reducible sequences of words found in a large corpus. (The snippet shows, among others, the sentences "The next competition is this weekend at Lillehammer in Norway." and "The next competition is at Lillehammer in Norway.", which make the bigram "this weekend" reducible.)

Hereinafter, P^est_stop(c_h, dir) denotes the STOP-probability we want to estimate from a large corpus; c_h is the head's POS tag and dir is the direction in which the STOP probability is estimated. If c_h is very often in the first position of reducible sequences, P^est_stop(c_h, left) will be high. Similarly, if c_h is often in the last position of reducible sequences, P^est_stop(c_h, right) will be high. For each POS tag c_h in the given corpus, we first compute its left and right "raw" scores S_stop(c_h, left) and S_stop(c_h, right) as the relative number of times a word with POS tag c_h was in the first (or last) position in a reducible sequence found in the corpus. We do not deal with sequences longer than a trigram since they are highly biased.

\[ S_{stop}(c_h, left) = \frac{\#\ \text{red. seq. } [c_h, \ldots] + \lambda}{\#\ c_h \text{ in the corpus}} \qquad S_{stop}(c_h, right) = \frac{\#\ \text{red. seq. } [\ldots, c_h] + \lambda}{\#\ c_h \text{ in the corpus}} \]

Note that the S_stop scores are not probabilities. Their main purpose is to sort the POS tags according to their "reducibility". It may happen that for many POS tags there are no reducible sequences found. To avoid zero scores, we use a simple smoothing by adding λ to each count:

\[ \lambda = \frac{\#\ \text{all reducible sequences}}{W}, \]

where W denotes the number of words in the given corpus. Such smoothing ensures that more frequent irreducible POS tags get a lower S_stop score than the less frequent ones. Since the reducible sequences found are very sparse, the values of the S_stop(c_h, dir) scores are very small. To convert them to estimated probabilities P^est_stop(c_h, dir), we need a smoothing that fulfills the following properties: (1) P^est_stop is a probability and therefore its value must be between 0 and 1. (2) The number of no-stop decisions (no matter in which direction) equals W (the number of words), since such a decision is made before each word is generated. The number of stop decisions is 2W, since they come after generating the last children in both directions. Therefore, the average P^est_stop(c_h, dir) over all words in the treebank should be 2/3. After some experimenting, we chose the following normalization formula

\[ P^{est}_{stop}(c_h, dir) = \frac{S_{stop}(c_h, dir)}{S_{stop}(c_h, dir) + \nu} \]

with a normalization constant ν. Condition (1) is fulfilled for any positive value of ν. Its exact value is set in accordance with requirement (2), so that the average value of P^est_stop is 2/3:

\[ \sum_{dir \in \{l,r\}} \sum_{c \in C} count(c)\, P^{est}_{stop}(c, dir) = \frac{2}{3} \cdot 2W, \]

where count(c) is the number of words with POS tag c in the corpus.
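A compact sketch of this raw-score computation is given below; it is our illustration rather than the released implementation. Sentences are assumed to be lists of tokens and a tags_of function is assumed to supply the POS tag of a token; the normalization of the raw scores to P^est_stop via the constant ν then proceeds as described next in the text.

```python
from collections import Counter

def reducible_sequences(sentences, max_len=3):
    """Find reducible sequences (up to trigrams): a sequence is reducible if
    the sentence with that sequence removed also occurs in the corpus."""
    sentence_set = {tuple(s) for s in sentences}
    found = []
    for words in sentences:
        for length in range(1, max_len + 1):
            for start in range(len(words) - length + 1):
                remainder = tuple(words[:start] + words[start + length:])
                if remainder and remainder in sentence_set:
                    found.append(words[start:start + length])
    return found

def raw_stop_scores(sentences, tags_of, max_len=3):
    """Compute S_stop(c, left) and S_stop(c, right) with add-lambda smoothing."""
    sequences = reducible_sequences(sentences, max_len)
    tag_count = Counter(tags_of(w) for s in sentences for w in s)
    first = Counter(tags_of(seq[0]) for seq in sequences)     # c starts a reducible sequence
    last = Counter(tags_of(seq[-1]) for seq in sequences)     # c ends a reducible sequence
    lam = len(sequences) / sum(tag_count.values())            # lambda = #sequences / W
    s_left = {c: (first[c] + lam) / n for c, n in tag_count.items()}
    s_right = {c: (last[c] + lam) / n for c, n in tag_count.items()}
    return s_left, s_right
```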
We find the unique value of ν that fulfills this constraint numerically using a binary search algorithm.

4 Model

We use the standard generative Dependency Model with Valence (Klein and Manning, 2004). The generative story is the following: First, the head of the sentence is generated. Then, for each head, all its left children are generated, then the left STOP, then all its right children, and then the right STOP. When a child is generated, the algorithm immediately recurses to generate its subtree. When deciding whether to generate another child in the direction dir or the STOP symbol, we use the P^dmv_stop(STOP | c_h, dir, adj, c_f) model. The new child c_d in the direction dir is generated according to the P_choose(c_d | c_h, dir) model. The probability of the whole dependency tree T is the following:

\[ P_{tree}(T) = P_{choose}(head(T) \mid ROOT, right) \cdot P_{tree}(D(head(T))) \]

\[ P_{tree}(D(c_h)) = \prod_{dir \in \{l,r\}} \Bigg( \prod_{c_d \in deps(dir,h)} P^{dmv}_{stop}(\neg STOP \mid c_h, dir, adj, c_f)\, P_{choose}(c_d \mid c_h, dir)\, P_{tree}(D(c_d)) \Bigg) P^{dmv}_{stop}(STOP \mid c_h, dir, adj, c_f), \]

where P_tree(D(c_h)) is the probability of the subtree governed by h in the tree T. The set of features on which the P^dmv_stop and P_choose probabilities are conditioned varies among the previous works. Our P^dmv_stop depends on the head POS tag c_h, direction dir, adjacency adj, and fringe POS tag c_f (described below). The use of adjacency is standard in DMV and enables us to have different P^dmv_stop for situations when no child was generated so far (adj = 1). That is, P^dmv_stop(c_h, dir, adj = 1, c_f) decides whether the word c_h has any children in the direction dir at all, whereas P^dmv_stop(c_h, dir, adj = 0, c_f) decides whether another child will be generated next to the already generated one. This distinction is of crucial importance for us: although we know how to estimate the STOP probabilities for adj = 1 from large data, we do not know anything about the STOP probabilities for adj = 0. The last factor c_f, called fringe, is the POS tag of the previously generated sibling in the current direction dir. If there is no such sibling (in case adj = 1), the head c_h is used as the fringe c_f. This is a relatively novel idea in DMV, introduced by Spitkovsky et al. (2012). We decided to use the fringe word in our model since it gives slightly better results. We assume that the distributions of P_choose and P^dmv_stop are good if the majority of the probability mass is concentrated on few factors; therefore, we apply a Chinese Restaurant process (CRP) on them. The probability of generating a new child node c_d attached to c_h in the direction dir given the history (all the nodes we have generated so far) is computed using the following formula:

\[ P_{choose}(c_d \mid c_h, dir) = \frac{\alpha_c \frac{1}{|C|} + count^{-}(c_d, c_h, dir)}{\alpha_c + count^{-}(c_h, dir)}, \]

where count^-(c_d, c_h, dir) denotes the number of times a child node c_d has been attached to c_h in the direction dir in the history. Similarly, count^-(c_h, dir) is the number of times something has been attached to c_h in the direction dir. The α_c is a hyperparameter and |C| is the number of distinct POS tags in the corpus.3 The STOP probability is computed in a similar way:

\[ P^{dmv}_{stop}(STOP \mid c_h, dir, adj, c_f) = \frac{\alpha_s \frac{2}{3} + count^{-}(STOP, c_h, dir, adj, c_f)}{\alpha_s + count^{-}(c_h, dir, adj, c_f)}, \]

where count^-(STOP, c_h, dir, adj, c_f) is the number of times a head c_h had the last child c_f in the direction dir in the history. The contribution of this paper is the inclusion of the stop-probability estimates into the DMV.
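Read as code, the two smoothed distributions amount to a handful of counters over the trees currently in the sampler's history. The class below is a minimal sketch of ours, with the paper's tuned hyperparameter values as defaults; it is not the released implementation.

```python
from collections import Counter

class DMVCounts:
    """CRP-style smoothed P_choose and P_stop, mirroring the two formulas above."""

    def __init__(self, num_tags, alpha_c=50.0, alpha_s=1.0):   # tuned values from Section 6.3
        self.num_tags = num_tags                # |C|, the number of distinct POS tags
        self.alpha_c, self.alpha_s = alpha_c, alpha_s
        self.choose = Counter()                 # (child, head, direction) -> count
        self.choose_total = Counter()           # (head, direction) -> count
        self.stop = Counter()                   # (head, direction, adj, fringe) -> STOP count
        self.stop_total = Counter()             # (head, direction, adj, fringe) -> all decisions

    def p_choose(self, child, head, direction):
        num = self.alpha_c / self.num_tags + self.choose[(child, head, direction)]
        return num / (self.alpha_c + self.choose_total[(head, direction)])

    def p_stop(self, head, direction, adj, fringe):
        """P(STOP | head, direction, adj, fringe); continuing has probability 1 - P(STOP)."""
        num = self.alpha_s * 2 / 3 + self.stop[(head, direction, adj, fringe)]
        return num / (self.alpha_s + self.stop_total[(head, direction, adj, fringe)])
```

With these counters in place, folding in the corpus-based estimates only requires changing the stop decision for adj = 1, which is exactly what the model introduced next does.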
Therefore, we introduce a new model P^{dmv+est}_stop, in which the probability based on the previously generated data is linearly combined with the probability estimates based on large corpora (Section 3):

\[ P^{dmv+est}_{stop}(STOP \mid c_h, dir, 1, c_f) = (1 - \beta) \cdot \frac{\alpha_s \frac{2}{3} + count^{-}(STOP, c_h, dir, 1, c_f)}{\alpha_s + count^{-}(c_h, dir, 1, c_f)} + \beta \cdot P^{est}_{stop}(c_h, dir) \]

\[ P^{dmv+est}_{stop}(STOP \mid c_h, dir, 0, c_f) = P^{dmv}_{stop}(STOP \mid c_h, dir, 0, c_f) \]

The hyperparameter β defines the ratio between the CRP-based and the estimation-based probability. The definition of P^{dmv+est}_stop for adj = 0 equals the basic P^dmv_stop, since we are able to estimate only the probability of whether a particular head POS tag c_h can or cannot have children in a particular direction, i.e. if adj = 1.

3 The number of classes |C| is often used in the denominator. We decided to put its reciprocal into the numerator since we observed such a model to perform better for a constant value of α_c over different languages and tagsets.

Finally, we obtain the probability of the whole generated treebank as a product over the trees:

\[ P_{treebank} = \prod_{T \in treebank} P_{tree}(T). \]

An important property of the CRP is the fact that the factors are exchangeable – i.e. no matter how the trees are ordered in the treebank, P_treebank is always the same.

5 Inference

We employ the Gibbs sampling algorithm (Gilks et al., 1996). Unlike in (Mareček and Žabokrtský, 2012), where edges were sampled individually, we sample whole trees from all possibilities on a given sentence using dynamic programming. The algorithm works as follows:

1. A random projective dependency tree is assigned to each sentence in the corpus.

2. Sampling: We go through the sentences in a random order. For each sentence, we sample a new dependency tree based on all other trees that are currently in the corpus.

3. Step 2 is repeated in many iterations. In this work, the number of iterations was set to 1000.

4. After the burn-in period (which was set to the first 500 iterations), we start collecting counts of edges between particular words that appeared during the sampling.

5. Parsing: Based on the collected counts, we compute the final dependency trees using the Chu-Liu/Edmonds' algorithm (1965) for finding maximum directed spanning trees.

5.1 Sampling

Our goal is to sample a new projective dependency tree T with probability proportional to P_tree(T). Since the factors are exchangeable, we can deal with any tree as if it was the last one in the corpus. We use dynamic programming to sample a tree with N nodes in O(N^4) time. Nevertheless, we sample trees using a modified probability P′_tree(T). In P_tree(T), the probability of an edge depends on counts of all other edges, including the edges in the same tree. We instead use P′_tree(T), where the counts are computed using only the other trees in the corpus, i.e., the probabilities of edges of T are independent. There is a standard way to sample using the real P_tree(T) – we can use P′_tree(T) as a proposal distribution in the Metropolis-Hastings algorithm (Hastings, 1970), which then produces trees with probabilities proportional to P_tree(T) using an acceptance-rejection scheme. We do not take this approach and we sample proportionally to P′_tree(T) only, because we believe that for large enough corpora, the two distributions are nearly identical. To sample a tree containing words w1, . . . , wN with probability proportional to P′_tree(T), we first compute three tables: • ti(g, i, j) for g < i or g > j is the sum of probabilities of any tree on words wi, . . .
, wj whose root is a child of wg, but not an outermost child in its direction; • to(g, i, j) is the same, but the tree is the outermost child of wg; • fo(g, i, j) for g < i or g > j is the sum of probabilities of any forest on words wi, . . . , wj, such that all the trees are children of wg and are the outermost children of wg in their direction. All the probabilities are computed using the P ′ tree. If we compute the tables inductively from the smallest trees to the largest trees, we can precompute all the O(N3) values in O(N4) time. Using these tables, we sample the tree recursively, starting from the root. At first, we sample the root r proportionally to the probability of a tree with the root r, which is a product of the probability of left children of r and right children of r. The probability of left children of r is either P ′ stop(STOP|r, left) if r has no children, or P ′ stop(¬STOP|r, left)fo(r, 1, r −1) otherwise; the probability of right children is analogous. After sampling the root, we sample the ranges of its left children, if any. We sample the first left child range l1 proportionally either to to(r, 1, r−1) if l1 = 1, or to ti(r, l1, r −1)fo(r, 1, l1 −1) if l1 > 1. Then we sample the second left child range l2 proportionally either to to(r, 1, l1 −1) if l2 = 1, or to ti(r, l2, l1 −1)fo(r, 1, l2 −1) if l2 > 1, and so on, while there are any left children. The right children ranges are sampled similarly. Finally, we recursively sample the children, i.e., their roots, their children and so on. It is simple to verify using the definition of Ptree that the described method indeed samples trees proportionally to P ′ tree. 5.2 Parsing Beginning the 500th iteration, we start collecting counts of individual dependency edges during the remaining iterations. After each iteration is finished (all the trees in the corpus are re-sampled), we increment the counter of all directed pairs of nodes which are connected by a dependency edge in the current trees. After the last iteration, we use these collected counts as weights and compute maximum directed spanning trees using the Chu-Liu/Edmonds’ algorithm (Chu and Liu, 1965). Therefore, the resulting trees consist of edges maximizing the sum of individual counts: TMST = arg max T X e∈T count(e) It is important to note that the MST algorithm may produce non-projective trees. Even if we average the strictly projective dependency trees, some non-projective edges may appear in the result. This might be an advantage since correct non-projective edges can be predicted; however, this relaxation may introduce mistakes as well. 6 Experiments 6.1 Data We use two types of resources in our experiments. The first type are CoNLL treebanks from the year 2006 (Buchholz and Marsi, 2006) and 2007 (Nivre et al., 2007), which we use for inference and for evaluation. As is the standard practice in unsupervised parsing evaluation, we removed all punctuation marks from the trees. In case a punctuation node was not a leaf, its children are attached to the parent of the removed node. For estimating the STOP probabilities (Section 3), we use the Wikipedia articles from W2C corpus (Majliˇs and ˇZabokrtsk´y, 2012), which provide sufficient amount of data for our purposes. Statistics across languages are shown in Table 1. The Wikipedia texts were automatically tokenized and segmented to sentences so that their tokenization was similar to the one in the CoNLL evaluation treebanks. 
Unfortunately, we were not able to find any segmenter for Chinese that would produce a desired segmentation; therefore, we removed Chinese from evaluation. The next step was to provide the Wikipedia texts with POS tags. We employed the TnT tagger (Brants, 2000) which was trained on the re286 language tokens red. language tokens red. (mil.) seq. (mil.) seq. Arabic 19.7 546 Greek 20.9 1037 Basque 14.1 645 Hungarian 26.3 2237 Bulgarian 18.8 1808 Italian 39.7 723 Catalan 27.0 712 Japanese 2.6 31 Czech 20.3 930 Portuguese 31.7 4765 Danish 15.9 576 Slovenian 13.7 513 Dutch 27.1 880 Spanish 53.4 1156 English 85.0 7603 Swedish 19.2 481 German 56.9 1488 Turkish 16.5 5706 Table 1: Wikipedia texts statistics: total number of tokens and number of reducible sequences found in them. spective CoNLL training data. The quality of such tagging is not very high since we do not use any lexicons or pretrained models. However, it is sufficient for obtaining usable stop probability estimates. 6.2 Estimated STOP probabilities We applied the algorithm described in Section 3 on the prepared Wikipedia corpora and obtained the stop-probabilities P est stop in both directions for all the languages and their POS tags. To evaluate the quality of our estimations, we compare them with P tb stop, the stop probabilities computed directly on the evaluation treebanks. The comparisons on five selected languages are shown in Figure 3. The individual points represent the individual POS tags, their size (area) shows their frequency in the particular treebank. The y-axis shows the stop probabilities estimated on Wikipedia by our algorithm, while the x-axis shows the stop probabilities computed on the evaluation CoNLL data. Ideally, the computed and estimated stop probabilities should be the same, i.e. all the points should be on the diagonal. Let’s focus on the graphs for English. Our method correctly recognizes that adverbs RB and adjectives JJ are often leaves (their stop probabilities in both directions are very high). Moreover, the estimates for RB are even higher than JJ, which will contribute to attaching adverbs to adjectives and not reversely. Nouns (NN, NNS) are somewhere in the middle, the stop probabilities for proper nouns (NNP) are estimated higher, which is correct since they have much less modifiers then the common nouns NN. The determiners are more problematic. Their estimated stop probability is not very high (about 0.65), while in the real treebank they are almost always leaves. 
This is caused by the fact that determiners are often obligatory in English and cannot be simply removed as, e.g., adjectives. The stop probabilities of prepositions (IN) are also very well recognized. While their left-stop is very probable (prepositions always start prepositional phrases), their right-stop probability is very low. The verbs (the most frequent verbal tag is VBD) have very low both right and left-stop probabilities. Our estimation assigns them the stop probability about 0.3 in both directions. This is quite high, but still, it is one of the lowest among other more frequent tags, and thus verbs tend to be the roots of the dependency trees. We could make similar analyses for other languages, but due to space reasons we only provide graphs for Czech, German, Spanish, and Hungarian in Figure 3.

Figure 3: Comparison of P^est_stop probabilities estimated from raw Wikipedia corpora (y-axis, "estimation from Wiki") and of P^tb_stop probabilities computed from CoNLL treebanks (x-axis, "estimation computed on the treebank"). The area of each point shows the relative frequency of an individual tag. (Panels show left-stop and right-stop probabilities for English, German, Spanish, Czech, and Hungarian.)

6.3 Settings

After a manual tuning, we have set our hyperparameters to the following values: α_c = 50, α_s = 1, β = 1/3. We have also found that the Gibbs sampler does not always converge to a similar grammar. For a couple of languages, the individual runs end up with very different trees. To prevent such differences, we run each inference 50 times and take the run with the highest final P_treebank (see Section 4) for the evaluation.

6.4 Results

Table 2 shows the results of our unsupervised parser and compares them with results previously reported in other works. In order to see the impact of using the estimated stop probabilities (using model P^dmv+est_stop), we provide results for classical DMV (using model P^dmv_stop) as well. We do not provide results for Chinese since we do not have any appropriate tokenizer at our disposal (see Section 3), and also for Turkish from CoNLL 2006 since the data is not available to us. We now focus on the third and fourth column of Table 2. The addition of estimated stop probabilities based on large corpora improves the parsing accuracy on 15 out of 20 treebanks. In many cases, the improvement is substantial, which means that the estimated stop probabilities forced the model to completely rebuild the structures. For example, in Bulgarian, if the P^dmv_stop model is used, all the prepositions are leaves and the verbs seldom govern sentences.
If the P dmv+est stop model is used, prepositions correctly govern nouns and verbs move to roots. We observe similar changes on Swedish as well. Unfortunately, there are also negative examples, such as Hungarian, where the addition of the estimated stop probabilities decreases the attachment score from 60.1% to 34%. This is probably caused by not very good estimates of the right-stop probability (see the last graph in Figure 3). Nevertheless, the estimated stop probabilities increase the average score over all the treebanks by more than 12% and therefore prove its usefulness. In the last two columns of Table 2, we provide results of two other works reported in the last year. The first one (spi12) is the DMV-based grammar inducer by Spitkovsky et al. (2012),4 the second one (mar12) is our previous work (Mareˇcek and ˇZabokrtsk´y, 2012). Comparing with (Spitkovsky et al., 2012), our parser reached better accuracy on 12 out of 20 treebanks. Although this might not seem as a big improvement, if we compare the average scores over the treebanks, our system significantly wins by more than 6%. The second system (mar12) outperforms our parser only on one treebank (on Italian by less than 3%) and its average score over all the treebanks is only 40%, i.e., more than 8% lower than the average score of our parser. To see the theoretical upper bound of our model performance, we replaced the P est stop estimates by the P tb stop estimates computed from the evaluation treebanks and run the same inference algorithm with the same setting. The average attachment score of such reference DMV is almost 65%. This shows a huge space in which the estimation of STOP probabilities could be further improved. 7 Conclusions and Future Work In this work, we studied the possibility of estimating the DMV stop-probabilities from a large raw corpus. We proved that such prior knowledge about stop-probabilities incorporated into the standard DMV model significantly improves the unsupervised dependency parsing and, since we are not aware of any other fully unsupervised dependency parser with higher average attachment score over CoNLL data, we state that we reached a new stateof-the-art result.5 4Possibly the current state-of-the-art results. They were compared with many previous works. 5A possible competitive work may be the work by Blunsom and Cohn (2010), who reached 55% accuracy on English as well. However, they do not provide scores measured on other CoNLL treebanks. 
language | year | P_{stop}^{dmv} | P_{stop}^{dmv+est} | reference P_{stop}^{dmv+tb} | spi12 | mar12
(CoNLL year; the two P_{stop} columns are this work, spi12 and mar12 are other systems)
Arabic | 06 | 10.6 (±8.7) | 38.2 (±0.5) | 61.2 | 10.9 | 26.5
Arabic | 07 | 22.0 (±0.1) | 35.3 (±0.2) | 65.3 | 44.9 | 27.9
Basque | 07 | 41.1 (±0.2) | 35.5 (±0.2) | 52.3 | 33.3 | 26.8
Bulgarian | 06 | 25.9 (±1.4) | 54.9 (±0.2) | 73.2 | 65.2 | 46.0
Catalan | 07 | 34.9 (±3.4) | 67.0 (±1.7) | 72.0 | 62.1 | 47.0
Czech | 06 | 32.3 (±3.8) | 52.4 (±5.2) | 64.0 | 55.1 | 49.5
Czech | 07 | 32.9 (±0.8) | 51.9 (±5.2) | 62.1 | 54.2 | 48.0
Danish | 06 | 30.8 (±4.3) | 41.6 (±1.1) | 60.0 | 22.2 | 38.6
Dutch | 06 | 25.7 (±5.7) | 47.5 (±0.4) | 58.9 | 46.6 | 44.2
English | 07 | 36.5 (±5.9) | 55.4 (±0.2) | 63.7 | 29.6 | 49.2
German | 06 | 29.9 (±4.6) | 52.4 (±0.7) | 65.5 | 39.1 | 44.8
Greek | 07 | 42.5 (±6.0) | 26.3 (±0.1) | 64.7 | 26.9 | 20.2
Hungarian | 07 | 60.8 (±0.2) | 34.0 (±0.3) | 68.3 | 58.2 | 51.8
Italian | 07 | 34.5 (±0.3) | 39.4 (±0.5) | 64.5 | 40.7 | 43.3
Japanese | 06 | 64.8 (±3.4) | 61.2 (±1.7) | 76.4 | 22.7 | 50.8
Portuguese | 06 | 35.7 (±4.3) | 69.6 (±0.1) | 77.3 | 72.4 | 50.6
Slovenian | 06 | 50.1 (±0.2) | 35.7 (±0.2) | 50.2 | 35.2 | 18.1
Spanish | 06 | 38.1 (±5.9) | 61.1 (±0.1) | 65.6 | 28.2 | 51.9
Swedish | 06 | 28.0 (±2.3) | 54.5 (±0.4) | 61.6 | 50.7 | 48.2
Turkish | 07 | 51.6 (±5.5) | 56.9 (±0.2) | 67.0 | 44.8 | 15.7
Average: | | 36.4 | 48.7 | 64.7 | 42.2 | 40.0
Table 2: Attachment scores on CoNLL 2006 and 2007 data. Standard deviations are provided in brackets. The DMV model using the standard P_{stop}^{dmv} probability is compared with the DMV using P_{stop}^{dmv+est}, which incorporates STOP estimates based on the reducibility principle. The reference DMV uses P_{stop}^{tb}, which is computed directly on the treebanks. The results reported in the previous works by Spitkovsky et al. (2012) and Mareček and Žabokrtský (2012) follow.
In future work, we would like to focus on unsupervised parsing without gold POS tags (see e.g. Spitkovsky et al. (2011a) and Christodoulopoulos et al. (2012)). We suppose that many of the current works on unsupervised dependency parsing use gold POS tags only as a simplification of this task, and that the ultimate purpose of this effort is to develop a fully unsupervised induction of linguistic structure from raw texts that would be useful across many languages, domains, and applications.
The software which implements the algorithms described in this paper, together with the P_{stop}^{est} estimations computed on Wikipedia texts, can be downloaded at http://ufal.mff.cuni.cz/~marecek/udp/.
Acknowledgments
This work has been supported by the AMALACH grant (DF12P01OVV02) of the Ministry of Culture of the Czech Republic. Data and some tools used as a prerequisite for the research described herein have been provided by the LINDAT/CLARIN Large Infrastructural project, No. LM2010013 of the Ministry of Education, Youth and Sports of the Czech Republic. We would like to thank Martin Popel, Zdeněk Žabokrtský, Rudolf Rosa, and three anonymous reviewers for many useful comments on the manuscript of this paper.
References
James K. Baker. 1979. Trainable grammars for speech recognition. In Speech communication papers presented at the 97th Meeting of the Acoustical Society, pages 547–550. Yonatan Bisk and Julia Hockenmaier. 2012. Induction of linguistic structure with combinatory categorial grammars. The NAACL-HLT Workshop on the Induction of Linguistic Structure, page 90. Phil Blunsom and Trevor Cohn. 2010. Unsupervised induction of tree substitution grammars for dependency parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 1204–1213, Stroudsburg, PA, USA. Association for Computational Linguistics. Thorsten Brants. 2000. TnT - A Statistical Part-of-Speech Tagger.
Proceedings of the sixth conference on Applied natural language processing, page 8. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X ’06, pages 149–164, Stroudsburg, PA, USA. Association for Computational Linguistics. Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. 2012. Turning the pipeline into a loop: Iterated unsupervised dependency parsing and PoS induction. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, pages 96–99, June. Y. J. Chu and T. H. Liu. 1965. On the Shortest Arborescence of a Directed Graph. Science Sinica, 14:1396–1400. 289 Shay B. Cohen, Kevin Gimpel, and Noah A. Smith. 2008. Logistic normal priors for unsupervised probabilistic grammar induction. In Neural Information Processing Systems, pages 321–328. Shay B. Cohen, Dipanjan Das, and Noah A. Smith. 2011. Unsupervised structure prediction with non-parallel multilingual guidance. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 50–61, Stroudsburg, PA, USA. Association for Computational Linguistics. Trevor Cohn and Mirella Lapata. 2008. Sentence compression beyond word deletion. In Proceedings of the 22nd International Conference on Computational Linguistics Volume 1, COLING ’08, pages 137–144, Stroudsburg, PA, USA. Association for Computational Linguistics. Kim Gerdes and Sylvain Kahane. 2011. Defining dependencies (and constituents). In Proceedings of Dependency Linguistics 2011, Barcelona. Walter R. Gilks, S. Richardson, and David J. Spiegelhalter. 1996. Markov chain Monte Carlo in practice. Interdisciplinary statistics. Chapman & Hall. W. Keith Hastings. 1970. Monte carlo sampling methods using markov chains and their applications. Biometrika, 57(1):pp. 97–109. William P. Headden III, Mark Johnson, and David McClosky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL ’09, pages 101– 109, Stroudsburg, PA, USA. Association for Computational Linguistics. Dan Klein and Christopher D. Manning. 2004. Corpusbased induction of syntactic structure: models of dependency and constituency. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL ’04, Stroudsburg, PA, USA. Association for Computational Linguistics. Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: a probabilistic approach to sentence compression. Artif. Intell., 139(1):91–107, July. Sandra K¨ubler, Ryan T. McDonald, and Joakim Nivre. 2009. Dependency Parsing. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers. Mark´eta Lopatkov´a, Martin Pl´atek, and Vladislav Kuboˇn. 2005. Modeling syntax of free word-order languages: Dependency analysis by reduction. In V´aclav Matouˇsek, Pavel Mautner, and Tom´aˇs Pavelka, editors, Lecture Notes in Artificial Intelligence, Proceedings of the 8th International Conference, TSD 2005, volume 3658 of Lecture Notes in Computer Science, pages 140–147, Berlin / Heidelberg. Springer. Martin Majliˇs and Zdenˇek ˇZabokrtsk´y. 2012. Language richness of the web. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC 2012), Istanbul, Turkey, May. 
European Language Resources Association (ELRA). David Mareˇcek and Zdenˇek ˇZabokrtsk´y. 2012. Exploiting reducibility in unsupervised dependency parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 297–307, Stroudsburg, PA, USA. Association for Computational Linguistics. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multisource transfer of delexicalized dependency parsers. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 62–72, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowledge to guide grammar induction. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 1234–1244, Stroudsburg, PA, USA. Association for Computational Linguistics. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 Shared Task on Dependency Parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 915–932, Prague, Czech Republic, June. Association for Computational Linguistics. Mohammad Sadegh Rasooli and Heshaam Faili. 2012. Fast unsupervised dependency parsing with arc-standard transitions. In Proceedings of ROBUS-UNSUP, pages 1–9. Noah Ashton Smith. 2007. Novel estimation methods for unsupervised discovery of latent structure in natural language text. Ph.D. thesis, Baltimore, MD, USA. AAI3240799. Anders Søgaard. 2011. From ranked words to dependency trees: two-stage unsupervised non-projective dependency parsing. In Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing, TextGraphs6, pages 60–68, Stroudsburg, PA, USA. Association for Computational Linguistics. Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2010. From baby steps to leapfrog: how ”less is more” in unsupervised dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 751–759, Stroudsburg, PA, USA. Association for Computational Linguistics. Valentin I. Spitkovsky, Hiyan Alshawi, Angel X. Chang, and Daniel Jurafsky. 2011a. Unsupervised dependency parsing without gold part-of-speech tags. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011). Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2011b. Punctuation: Making a point in unsupervised dependency parsing. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning (CoNLL-2011). Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2012. Three Dependency-and-Boundary Models for Grammar Induction. In Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012). 290
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 291–301, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Transfer Learning for Constituency-Based Grammars Yuan Zhang, Regina Barzilay Massachusetts Institute of Technology {yuanzh, regina}@csail.mit.edu Amir Globerson The Hebrew University [email protected] Abstract In this paper, we consider the problem of cross-formalism transfer in parsing. We are interested in parsing constituencybased grammars such as HPSG and CCG using a small amount of data specific for the target formalism, and a large quantity of coarse CFG annotations from the Penn Treebank. While all of the target formalisms share a similar basic syntactic structure with Penn Treebank CFG, they also encode additional constraints and semantic features. To handle this apparent discrepancy, we design a probabilistic model that jointly generates CFG and target formalism parses. The model includes features of both parses, allowing transfer between the formalisms, while preserving parsing efficiency. We evaluate our approach on three constituency-based grammars — CCG, HPSG, and LFG, augmented with the Penn Treebank-1. Our experiments show that across all three formalisms, the target parsers significantly benefit from the coarse annotations.1 1 Introduction Over the last several decades, linguists have introduced many different grammars for describing the syntax of natural languages. Moreover, the ongoing process of developing new formalisms is intrinsic to linguistic research. However, before these grammars can be used for statistical parsing, they require annotated sentences for training. The difficulty of obtaining such annotations is a key limiting factor that inhibits the effective use of these grammars. 1The source code for the work is available at http://groups.csail.mit.edu/rbg/code/ grammar/acl2013. The standard solution to this bottleneck has relied on manually crafted transformation rules that map readily available syntactic annotations (e.g, the Penn Treebank) to the desired formalism. Designing these transformation rules is a major undertaking which requires multiple correction cycles and a deep understanding of the underlying grammar formalisms. In addition, designing these rules frequently requires external resources such as Wordnet, and even involves correction of the existing treebank. This effort has to be repeated for each new grammar formalism, each new annotation scheme and each new language. In this paper, we propose an alternative approach for parsing constituency-based grammars. Instead of using manually-crafted transformation rules, this approach relies on a small amount of annotations in the target formalism. Frequently, such annotations are available in linguistic texts that introduce the formalism. For instance, a textbook on HPSG (Pollard and Sag, 1994) illustrates grammatical constructions using about 600 examples. While these examples are informative, they are not sufficient for training. To compensate for the annotation sparsity, our approach utilizes coarsely annotated data readily available in large quantities. A natural candidate for such coarse annotations is context-free grammar (CFG) from the Penn Treebank, while the target formalism can be any constituency-based grammars, such as Combinatory Categorial Grammar (CCG) (Steedman, 2001), Lexical Functional Grammar (LFG) (Bresnan, 1982) or Head-Driven Phrase Structure Grammar (HPSG) (Pollard and Sag, 1994). 
All of these formalisms share a similar basic syntactic structure with Penn Treebank CFG. However, the target formalisms also encode additional constraints and semantic features. For instance, Penn Treebank annotations do not make an explicit distinction between complement and adjunct, while all the above grammars mark these 291 roles explicitly. Moreover, even the identical syntactic information is encoded differently in these formalisms. An example of this phenomenon is the marking of subject. In LFG, this information is captured in the mapping equation, namely ↑SBJ =↓, while Penn Treebank represents it as a functional tag, such as NP-SBJ. Figure 1 shows derivations in the three target formalisms we consider, as well as a CFG derivation. We can see that the derivations of these formalisms share the same basic structure, while the formalism-specific information is mainly encoded in the lexical entries and node labels. To enable effective transfer the model has to identify shared structural components between the formalisms despite the apparent differences. Moreover, we do not assume parallel annotations. To this end, our model jointly parses the two corpora according to the corresponding annotations, enabling transfer via parameter sharing. In particular, we augment each target tree node with hidden variables that capture the connection to the coarse annotations. Specifically, each node in the target tree has two labels: an entry which is specific to the target formalism, and a latent label containing a value from the Penn Treebank tagset, such as NP (see Figure 2). This design enables us to represent three types of features: the target formalismspecific features, the coarse formalism features, and features that connect the two. This modeling approach makes it possible to perform transfer to a range of target formalisms, without manually drafting formalism-specific rules. We evaluate our approach on three constituency-based grammars — CCG, HPSG, and LFG. As a source of coarse annotations, we use the Penn Treebank.2 Our results clearly demonstrate that for all three formalisms, parsing accuracy can be improved by training with additional coarse annotations. For instance, the model trained on 500 HPSG sentences achieves labeled dependency F-score of 72.3%. Adding 15,000 Penn Treebank sentences during training leads to 78.5% labeled dependency F-score, an absolute improvement of 6.2%. To achieve similar performance in the absence of coarse annotations, the parser has to be trained on about 1,500 sentences, namely three times what is needed when using coarse annotations. Similar results are 2While the Penn Treebank-2 contains richer annotations, we decided to use the Penn Treebank-1 to demonstrate the feasibility of transfer from coarse annotations. CFG CCG LFG I eat apples NP VB NP VP S I eat apples NP (S[dcl]\NP)/NP NP S[dcl]\NP S[dcl] I eat apples [Pron.I] [ SBJ, OBJ] [N.3pl] ROOT ↑=↓ ↑ ↑ =↓ SBJ! ↑ =↓ OBJ! ↑ ↑=↓ HPSG I eat apples [N.no3sg] [N<V.bse>N] [N.3pl] head_comp subj_head Figure 1: Derivation trees for CFG as well as CCG, HPSG and LFG formalisms. also observed on CCG and LFG formalisms. 2 Related Work Our work belongs to a broader class of research on transfer learning in parsing. This area has garnered significant attention due to the expense associated with obtaining syntactic annotations. 
Transfer learning in parsing has been applied in different contexts, such as multilingual learning (Snyder et al., 2009; Hwa et al., 2005; McDonald et al., 2006; McDonald et al., 2011; Jiang and Liu, 2009), domain adaptation (McClosky et al., 2010; Dredze et al., 2007; Blitzer et al., 2006), and crossformalism transfer (Hockenmaier and Steedman, 2002; Miyao et al., 2005; Cahill et al., 2002; Riezler et al., 2002; Chen and Shanker, 2005; Candito et al., 2010). There have been several attempts to map annotations in coarse grammars like CFG to annotations in richer grammar, like HPSG, LFG, or CCG. Traditional approaches in this area typically rely on manually specified rules that encode the relation between the two formalisms. For instance, mappings may specify how to convert traces and functional tags in Penn Treebank to the f-structure in LFG (Cahill, 2004). These conversion rules are typically utilized in two ways: (1) to create a new treebank which is consequently used to train a parser for the target formalism (Hockenmaier and Steedman, 2002; Clark and Curran, 2003; Miyao et al., 2005; Miyao and Tsujii, 2008), (2) to translate the output of a CFG parser into the target formalism (Cahill et al., 2002). The design of these rules is a major linguistic and computational undertaking, which requires multiple iterations over the data to increase coverage (Miyao et al., 2005; Oepen et al., 2004). By nature, the mapping rules are formalism spe292 cific and therefore not transferable. Moreover, frequently designing such mappings involves modification to the original annotations. For instance, Hockenmaier and Steedman (2002) made thousands of POS and constituent modifications to the Penn Treebank to facilitate transfer to CCG. More importantly, in some transfer scenarios, deterministic rules are not sufficient, due to the high ambiguity inherent in the mapping. Therefore, our work considers an alternative set-up for crossformalism transfer where a small amount of annotations in the target formalism is used as an alternative to using deterministic rules. The limitation of deterministic transfer rules has been recognized in prior work (Riezler et al., 2002). Their method uses a hand-crafted LFG parser to create a set of multiple parsing candidates for a given sentence. Using the partial mapping from CFG to LFG as the guidance, the resulting trees are ranked based on their consistency with the labeled LFG bracketing imported from CFG. In contrast to this method, we neither require a parser for the target formalism, nor manual rules for partial mapping. Consequently, our method can be applied to many different target grammar formalisms without significant engineering effort for each one. The utility of coarse-grained treebanks is determined by the degree of structural overlap with the target formalism. 3 The Learning Problem Recall that our goal is to learn how to parse the target formalisms while using two annotated sources: a small set of sentences annotated in the target formalism (e.g., CCG), and a large set of sentences with coarse annotations. For the latter, we use the CFG parses from the Penn Treebank. For simplicity we focus on the CCG formalism in what follows. We also generalize our model to other formalisms, as explained in Section 5.4. Our notations are as follows: an input sentence is denoted by S. A CFG parse is denoted by yCFG and a CCG parse is denoted by yCCG. Clearly the set of possible values for yCFG and yCCG is determined by S and the grammar. 
The training set is a set of N sentences S1, . . . , SN with CFG parses y1 CFG, . . . , yN CFG, and M sentences ¯S1, . . . , ¯SM with CCG parses y1 CCG, . . . , yM CCG. It is important to note that we do not assume we have parallel data for CCG and CFG. Our goal is to use such a corpus for learning eat apples coarse feature on yCFG VP VP,NP VP (S[dcl]\NP)/NP VP S[dcl]\NP NP NP formalism feature on yCCG S[dcl]\NP (S[dcl]\NP)/NP,NP joint feature on yCFG, yCCG VP, S[dcl]\NP (VP, (S[dcl]\NP)/NP), (NP, NP) Figure 2: Illustration of the joint CCG-CFG representation. The shadowed labels correspond to the CFG derivation yCF G, whereas the other labels correspond to the CCG derivation yCCG. Note that the two derivations share the same (binarized) tree structure. Also shown are features that are turned on for this joint derivation (see Section 6). how to generate CCG parses to unseen sentences. 4 A Joint Model for Two Formalisms The key idea behind our work is to learn a joint distribution over CCG and CFG parses. Such a distribution can be marginalized to obtain a distribution over CCG or CFG and is thus appropriate when the training data is not parallel, as it is in our setting. It is not immediately clear how to jointly model the CCG and CFG parses, which are structurally quite different. Furthermore, a joint distribution over these will become difficult to handle computationally if not constructed carefully. To address this difficulty, we make several simplifying assumptions. First, we assume that both parses are given in normal form, i.e., they correspond to binary derivation trees. CCG parses are already provided in this form in CCGBank. CFG parses in the Penn Treebank are not binary, and we therefore binarize them, as explained in Section 5.3. Second, we assume that any yCFG and yCCG jointly generated must share the same derivation tree structure. This makes sense. Since both formalisms are constituency-based, their trees are expected to describe the same constituents. We denote the set of valid CFG and CCG joint parses for sentence S by Y(S). The above two simplifying assumptions make it easy to define joint features on the two parses, as explained in Section 6. The representation and features are illustrated in Figure 2. We shall work within the discriminative framework, where given a sentence we model a distribution over parses. As is standard in such settings, the distribution will be log-linear in a set of features of these parses. Denoting y = (yCFG, yCCG), we seek to model the distribution 293 p(y|S) corresponding to the probability of generating a pair of parses (CFG and CCG) given a sentence. The distribution thus has the following form: pjoint(y|S; θ) = 1 Z(S; θ)ef(y,S)·θ . (1) where θ is a vector of parameters to be learned from data, and f(y, S) is a feature vector. Z(S; θ) is a normalization (partition) function normalized over y ∈Y(S) the set of valid joint parses. The feature vector contains three types of features: CFG specific, CCG specific and joint CFGCCG. We denote these by fCFG, fCCG, fjoint. These depend on yCCG, yCFG and y respectively. Accordingly, the parameter vector θ is a concatenation of θCCG, θCFG and θjoint. As mentioned above, we can use Equation 1 to obtain distributions over yCCG and yCFG via marginalization. For the distribution over yCCG we do precisely this, namely use: pCCG(yCCG|S; θ) = X yCF G pjoint(y|S; θ) (2) For the distribution over yCFG we could have marginalized pjoint over yCCG. 
However, this computation is costly for each sentence, and has to be repeated for all the sentences. Instead, we assume that the distribution over yCFG is a loglinear model with parameters θCFG (i.e., a subvector of θ) , namely: pCFG(yCFG|S; θCFG) = efCF G(yCF G,S)·θCF G Z(S; θCFG) . (3) Thus, we assume that both pjoint and pCFG have the same dependence on the fCFG features. The Likelihood Objective: Given the models above, it is natural to use maximum likelihood to find the optimal parameters. To do this, we define the following regularized likelihood function: L(θ) = N X i=1 log pCFG(yi CFG|Si, θCFG)  + M X i=1 log pCCG(yi CCG| ¯Si, θ)  −λ 2 ∥θ∥2 2 where pCCG and pCFG are defined in Equations 2 and 3 respectively. The last term is the l2-norm regularization. Our goal is then to find a θ that maximizes L(θ). Training Algorithm: For maximizing L(θ) w.r.t. θ we use the limited-memory BFGS algorithm (Nocedal and Wright, 1999). Calculating the gradient of L(θ) requires evaluating the expected values of f(y, S) and fCFG under the distributions pjoint and pCFG respectively. This can be done via the inside-outside algorithm.3 Parsing Using the Model: To parse a sentence S, we calculate the maximum probability assignment for pjoint(y|S; θ).4 The result is both a CFG and a CCG parse. Here we will mostly be interested in the CCG parse. The joint parse with maximum probability is found using a standard CYK chart parsing algorithm. The chart construction will be explained in Section 5. 5 Implementation This section introduces important implementation details, including supertagging, feature forest pruning and binarization methods. Finally, we explain how to generalize our model to other constituency-based formalisms. 5.1 Supertagging When parsing a target formalism tree, one needs to associate each word with a lexical entry. However, since the number of candidates is typically more than one thousand, the size of the chart explodes. One effective way of reducing the number of candidates is via supertagging (Clark and Curran, 2007). A supertagger is used for selecting a small set of lexical entry candidates for each word in the sentence. We use the tagger in (Clark and Curran, 2007) as a general suppertagger for all the grammars considered. The only difference is that we use different lexical entries in different grammars. 5.2 Feature Forest Pruning In the BFGS algorithm (see Section 4), feature expectation is computed using the inside-outside algorithm. To perform this dynamic programming efficiently, we first need to build the packed chart, namely the feature forest (Miyao, 2006) to represent the exponential number of all possible tree 3To speed up the implementation, gradient computation is parallelized, using the Message Passing Interface package (Gropp et al., 1999). 4An alternative approach would be to marginalize over yCF G and maximize over yCCG. However, this is a harder computational problem. 294 structures. However, a common problem for lexicalized grammars is that the forest size is too large. In CFG, the forest is pruned according to the inside probability of a simple generative PCFG model and a prior (Collins, 2003). The basic idea is to prune the trees with lower probability. For the target formalism, a common practice is to prune the forest using the supertagger (Clark and Curran, 2007; Miyao, 2006). In our implementation, we applied all pruning techniques, because the forest is a combination of CFG and target grammar formalisms (e.g., CCG or HPSG). 
5.3 Binarization We assume that the derivation tree in the target formalism is in a normal form, which is indeed the case for the treebanks we consider. As mentioned in Section 4, we would also like to work with binarized CFG derivations, such that all trees are in normal form and it is easy to construct features that link the two (see Section 6). Since Penn Treebank trees are not binarized, we construct a simple procedure for binarizing them. The procedure is based on the available target formalism parses in the training corpus, which are binarized. We illustrate it with an example. In what follows, we describe derivations using the POS of the head words of the corresponding node in the tree. This makes it possible to transfer binarization rules between formalisms. Suppose we want to learn the binarization rule of the following derivation in CFG: NN →(DT JJ NN) (4) We now look for binary derivations with these POS in the target formalism corpus, and take the most common binarization form. For example, we may find that the most common binarization to binarize the CFG derivation in Equation 4 is: NN →(DT (JJ NN)) If no (DT JJ NN) structure is observed in the CCG corpus, we first apply the binary branching on the children to the left of the head, and then on the children to the right of the head. We also experiment with using fixed binarization rules such as left/right branching, instead of learning them. This results in a drop on the dependency F-score by about 5%. 5.4 Implementation in Other Formalisms We introduce our model in the context of CCG, but the model can easily be generalized to other constituency-based grammars, such as HPSG and LFG. In a derivation tree, the formalism-specific information is mainly encoded in the lexical entries and the applied grammar rules, rather than the tree structures. Therefore we only need to change the node labels and lexical entries to the languagespecific ones, while the framework of the model remains the same. 6 Features Feature functions in log-linear models are designed to capture the characteristics of each derivation in the tree. In our model, as mentioned in Section 1, the features are also defined to enable information transfer between coarse and rich formalisms. In this section, we first introduce how different types of feature templates are designed, and then show an example of how the features help transfer the syntactic structure information. Note that the same feature templates are used for all the target grammar formalisms. Recall that our y contains both the CFG and CCG parses, and that these use the same derivation tree structure. Each feature will consider either the CFG derivation, the CCG derivation or these two derivations jointly. The feature construction is similar to constructions used in previous work (Miyao, 2006). The features are based on the atomic features listed in Table 1. These will be used to construct f(y, S) as explained next. hl lexical entries/CCG categories of the head word r grammar rules, i.e. HPSG schema, resulting CCG categories, LFG mapping equations sy CFG syntactic label of the node (e.g. NP, VP) d distance between the head words of the children c whether a comma exists between the head words of the children sp the span of the subtree rooted at the node hw surface form of the head word of the node hp part-of-speech of the head word pi part-of-speech of the i-th word in the sentence Table 1: Templates of atomic features. 
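Before turning to the feature templates, a brief aside on the binarization procedure of Section 5.3: the rule learning there amounts to counting, for each head-POS pattern, the bracketing shapes observed in the (already binary) target-formalism treebank, and falling back to a fixed head-outward order when a pattern is unseen. The following sketch is purely illustrative; the data structures and names are ours, not the authors' implementation.

from collections import Counter, defaultdict

# For each (parent head POS, POS sequence of the children), count the nested
# bracketings observed in the binary target-formalism treebank, e.g.
# ("NN", ("DT", "JJ", "NN")) -> Counter({("DT", ("JJ", "NN")): 17, ...}).
binarization_counts = defaultdict(Counter)

def observe(parent_head_pos, children_pos, bracketing):
    binarization_counts[(parent_head_pos, tuple(children_pos))][bracketing] += 1

def binarize(parent_head_pos, children_pos, head_index):
    # Pick the most frequent bracketing seen for this pattern; if the pattern
    # was never observed, branch over the children to the left of the head
    # first (closest to the head innermost), then over the children to its right.
    counts = binarization_counts.get((parent_head_pos, tuple(children_pos)))
    if counts:
        return counts.most_common(1)[0][0]
    left = list(children_pos[:head_index + 1])
    node = left[-1]
    for c in reversed(left[:-1]):
        node = (c, node)
    for c in children_pos[head_index + 1:]:
        node = (node, c)
    return node

observe("NN", ["DT", "JJ", "NN"], ("DT", ("JJ", "NN")))
print(binarize("NN", ["DT", "JJ", "NN"], head_index=2))  # ('DT', ('JJ', 'NN'))
print(binarize("VB", ["VB", "NP", "PP"], head_index=0))  # (('VB', 'NP'), 'PP')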
We define the following feature templates: fbinary for binary derivations, funary for unary derivations, and froot for the root nodes. These use the atomic features in Table 1, resulting in the 295 following templates: fbinary = * r, syp, d, c syl, spl, hwl, hpl, hll, syr, spr, hwr, hpr, hlr, pst−1, pst−2, pen+1, pen+2 + funary = ⟨r, syp, hw, hp, hl⟩ froot = ⟨sy, hw, hp, hl⟩ In the above we used the following notation: p, l, r denote the parent node and left/right child node, and st, en denote the starting and ending index of the constituent. We also consider templates with subsets of the above features. The final list of binary feature templates is shown in Table 2. It can be seen that some features depend only on the CFG derivations (i.e., those without r,hl), and are thus in fCFG(y, S). Others depend only on CCG derivations (i.e., those without sy), and are in fCCG(y, S). The rest depend on both CCG and CFG and are thus in fjoint(y, S). Note that after binarization, grandparent and sibling information becomes very important in encoding the structure. However, we limit the features to be designed locally in a derivation in order to run inside-outside efficiently. Therefore we use the preceding and succeeding POS tag information to approximate the grandparent and sibling information. Empirically, these features yield a significant improvement on the constituent accuracy. fCF G ⟨d, wl,r, hpl,r, syp,l,r⟩, ⟨d, wl,r, syp,l,r⟩, ⟨c, wl,r, hpl,r, syp,l,r⟩, ⟨c, wl,r, syp,l,r⟩, ⟨d, c, hpl,r, syp,l,r⟩, ⟨d, c, syp,l,r⟩, ⟨c, spl,r, hpl,r, syp,l,r⟩, ⟨c, spl,r, syp,l,r⟩, ⟨pst−1, syp,l,r⟩, ⟨pen+1, syp,l,r⟩, ⟨pst−1, pen+1, syp,l,r⟩, ⟨pst−1, pst−2, syp,l,r⟩, ⟨pen+1, pen+2, syp,l,r⟩, ⟨pst−1, pst−2, pen+1, pen+2, syp,l,r⟩, fCCG ⟨r, d, c, hwl,r, hpl,r, hll,r⟩, ⟨r, d, c, hwl,r, hpl,r⟩ ⟨r, d, c, hwl,r, hll,r⟩, ⟨r, c, spl,r, hwl,r, hpl,r, hll,r⟩ ⟨r, c, spl,r, hwl,r, hpl,r, ⟩, ⟨r, c, spl,r, hwl,r, hll,r⟩ ⟨r, d, c, hpl,r, hll,r⟩, ⟨r, d, c, hpl,r⟩, ⟨r, d, c, hll,r⟩ ⟨r, c, hpl,r, hll,r⟩, ⟨r, c, hpl,r⟩, ⟨r, c, hll,r⟩ fjoint ⟨r, d, c, syl,r, hll,r⟩, ⟨r, d, c, syl,r⟩ ⟨r, c, spl,r, syl,r, hll,r⟩, ⟨r, c, spl,r, syl,r⟩ Table 2: Binary feature templates used in f(y, S). Unary and root features follow a similar pattern. In order to apply the same feature templates to other target formalisms, we only need to assign the atomic features r and hl with the formalismspecific values. We do not need extra engineering work on redesigning the feature templates. eat apples VP (S[dcl]\NP)/NP VP S[dcl]\NP NP NP VP VP,NP S[dcl]\NP (S[dcl]\NP)/NP,NP VP, S[dcl]\NP (VP, (S[dcl]\NP)/NP), (NP, NP) CCGbank VP Penn Treebank VP NP write letters VP VP,NP fCFG(y,S): fCFG(y,S): fCCG(y,S): f joint(y,S): Figure 3: Example of transfer between CFG and CCG formalisms. Figure 3 gives an example in CCG of how features help transfer the syntactic information from Penn Treebank and learn the correspondence to the formalism-specific information. From the Penn Treebank CFG annotations, we can learn that the derivation VP→(VP, NP) is common, as shown on the left of Figure 3. In a CCG tree, this tendency will encourage the yCFG (latent) variables to take this CFG parse. Then weights on the fjoint features will be learned to model the connection between the CFG and CCG labels. Moreover, the formalism-specific features fCCG can also encode the formalism-specific syntactic and semantic information. These three types of features work together to generate a tree skeleton and fill in the CFG and CCG labels. 7 Evaluation Setup Grammar Train Dev. Test CCG Sec. 
02-21 | Sec. 00 | Sec. 23
HPSG | (same WSJ sections as CCG)
LFG | 140 sents. in PARC700 (dev.) | 560 sents. in PARC700 (test)
Table 3: Training/Dev./Test split on WSJ sections and PARC700 for different grammar formalisms.
Datasets: As a source of coarse annotations, we use the Penn Treebank-1 (Marcus et al., 1993). In addition, for CCG, HPSG and LFG, we rely on formalism-specific corpora developed in prior research (Hockenmaier and Steedman, 2002; Miyao et al., 2005; Cahill et al., 2002; King et al., 2003). All of these corpora were derived via conversion of the Penn Treebank to the target formalisms. In particular, our CCG and HPSG datasets were converted from the Penn Treebank based on hand-crafted rules (Hockenmaier and Steedman, 2002; Miyao et al., 2005). Table 3 shows which sections of the treebanks were used in training, testing and development for both formalisms. Our LFG training dataset was constructed in a similar fashion (Cahill et al., 2002). However, we choose to use PARC700 as our LFG testing and development datasets, following the previous work by Kaplan et al. (2004). It contains 700 manually annotated sentences that are randomly selected from Penn Treebank Section 23. The split of PARC700 follows the setting in (Kaplan et al., 2004). Since our model does not assume parallel data, we use distinct sentences in the source and target treebanks. Following previous work (Hockenmaier, 2003; Miyao and Tsujii, 2008), we only consider sentences not exceeding 40 words, except on PARC700 where all sentences are used.
Figure 4: Model performance with 500 target formalism trees and different numbers of CFG trees, evaluated using labeled/unlabeled dependency F-score and unlabeled PARSEVAL (panels: (a) CCG, (b) HPSG, (c) LFG).
Evaluation Metrics: We use two evaluation metrics. First, following previous work, we evaluate our method using the labeled and unlabeled predicate-argument dependency F-score. This metric is commonly used to measure parsing quality for the formalisms considered in this paper. The detailed definition of this measure as applied to each formalism is provided in (Clark and Curran, 2003; Miyao and Tsujii, 2008; Cahill et al., 2004). For CCG, we use the evaluation script from the C&C tools.[5] For HPSG, we evaluate all types of dependencies, including punctuation. For LFG, we consider the preds-only dependencies, which are the dependencies between pairs of words. Secondly, we also evaluate using unlabeled PARSEVAL, a standard measure for PCFG parsing (Petrov and Klein, 2007; Charniak and Johnson, 2005; Charniak, 2000; Collins, 1997). The dependency F-score captures both the target-grammar labels and tree-structural relations. The unlabeled PARSEVAL is used as an auxiliary measure that enables us to separate these two aspects by focusing on the structural relations exclusively.
[5] Available at http://svn.ask.it.usyd.edu.au/trac/candc/wiki
Training without CFG Data: To assess the impact of coarse data in the experiments below, we also consider the model trained only on formalism-specific annotations. When no CFG sentences are available, we assign all the CFG labels to a special value shared by all the nodes. In this set-up, the model reduces to a normal log-linear model for the target formalism.
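As a concrete reference for the first metric, labeled/unlabeled predicate-argument dependency F-score can be sketched as simple set overlap between predicted and gold dependencies. This is only an illustrative sketch; the actual evaluation follows the formalism-specific scripts cited above.

def dependency_f_score(gold, predicted, labeled=True):
    # F-score over predicate-argument dependencies, each represented as a
    # (head_word, argument_word, label) triple; the label is ignored for the
    # unlabeled variant. A sentence's dependencies are treated as a set here.
    def strip(deps):
        return {(h, a, l) if labeled else (h, a) for (h, a, l) in deps}
    g, p = strip(gold), strip(predicted)
    if not g or not p:
        return 0.0
    correct = len(g & p)
    precision = correct / len(p)
    recall = correct / len(g)
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

gold = {("eat", "I", "ARG1"), ("eat", "apples", "ARG2")}
pred = {("eat", "I", "ARG1"), ("eat", "apples", "ARG1")}
print(dependency_f_score(gold, pred, labeled=True))   # 0.5
print(dependency_f_score(gold, pred, labeled=False))  # 1.0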
Parameter Settings: During training, all the feature parameters θ are initialized to zero. The hyperparameters used in the model are tuned on the development sets. We noticed, however, that the resulting values are consistent across different formalisms. In particular, we set the l2-norm weight to λ = 1.0, the supertagger threshold to β = 0.01, and the PCFG pruning threshold to α = 0.002.
8 Experiment and Analysis
Impact of Coarse Annotations on Target Formalism: To analyze the effectiveness of annotation transfer, we fix the number of annotated trees in the target formalism and vary the amount of coarse annotations available to the algorithm during training. In particular, we use 500 sentences with formalism-specific annotations, and vary the number of CFG trees from zero to 15,000. As Figure 4 shows, CFG data boosts parsing accuracy for all the target formalisms. For instance, there is a gain of 6.2% in labeled dependency F-score for the HPSG formalism when 15,000 CFG trees are used. Moreover, increasing the number of coarse annotations used in training leads to further improvement on the different evaluation metrics.
Figure 5: Model performance with different numbers of target formalism trees and either zero or 15,000 CFG trees (legend: w/o CFG vs. 15,000 CFG; panels (a)-(c) and (d)-(f): CCG, HPSG, LFG). The first row shows the results of labeled dependency F-score and the second row shows the results of unlabeled PARSEVAL.
Tradeoff between Target and Coarse Annotations: We also assess the relative contribution of coarse annotations when the size of the annotated training corpus in the target formalism varies. In this set of experiments, we fix the number of CFG trees to 15,000 and vary the number of target annotations from 500 to 4,000. Figure 5 shows the relative contribution of formalism-specific annotations compared to that of the coarse annotations. For instance, Figure 5a shows that the parsing performance achieved using 2,000 CCG sentences can be achieved using approximately 500 CCG sentences when coarse annotations are available for training. More generally, the result convincingly demonstrates that coarse annotations are helpful for all sizes of formalism-specific training data. As expected, the improvement margin decreases when more formalism-specific data is used.
Figure 5 also illustrates slightly different characteristics of transfer performance between the two evaluation metrics. Across all three grammars, we can observe that adding CFG data has a more pronounced effect on the PARSEVAL measure than on the dependency F-score. This phenomenon can be explained as follows. The unlabeled PARSEVAL score (Figure 5d-f) mainly relies on the coarse structural information. On the other hand, the predicate-argument dependency F-score (Figure 5a-c) also relies on the target grammar information. Because our model only transfers structural information from the source treebank, the gains in PARSEVAL are expected to be larger than those in dependency F-score.
Grammar | Parser | 1,000 grammar trees | 15,000 grammar trees
CCG | C&C | 74.1 / 83.4 | 82.6 / 90.1
CCG | Model | 76.8 / 85.5 | 84.7 / 90.9
HPSG | Enju | 75.8 / 80.6 | 84.2 / 87.3
HPSG | Model | 76.9 / 82.0 | 84.9 / 88.3
LFG | Pipeline Annotator | 68.5 / 74.0 | 82.6 / 85.9
LFG | Model | 69.8 / 76.6 | 81.1 / 84.7
Table 4: The labeled/unlabeled dependency F-score comparisons between our model and state-of-the-art parsers.
Comparison to State-of-the-art Parsers: We would also like to demonstrate that the above gains of our transfer model are achieved using an adequate formalism-specific parser. Since our model can be trained exclusively on formalism-specific data, we can compare it to state-of-the-art formalism-specific parsers. For this experiment, we choose the C&C parser (Clark and Curran, 2003) for CCG, the Enju parser (Miyao and Tsujii, 2008) for HPSG and the pipeline automatic annotator (Cahill et al., 2004) with the Charniak parser for LFG. For all three parsers, we use the implementation provided by the authors with the default parameter values. All the models are trained on either 1,000 or 15,000 sentences annotated with formalism-specific trees, thus evaluating their performance on both small and large amounts of data. As Table 4 shows, our model is competitive with all the baselines described above. It is not surprising that Cahill’s model outperforms our log-linear model because it relies heavily on hand-crafted rules optimized for the dataset.
Correspondence between CFG and Target Formalisms: Finally, we analyze highly weighted features. Table 5 shows such features for HPSG; similar patterns are also found for the other grammar formalisms. The first two features are formalism-specific ones, the first for HPSG and the second for CFG. They show that we correctly learn a frequent derivation in the target formalism and CFG. The third one shows an example of a connection between CFG and the target formalism. Our model correctly learns that a syntactic derivation with children VP and NP is very likely to be mapped to the derivation (head comp) → ([N⟨V⟩N],[N.3sg]) in HPSG.
Feature type | Features with high weight
Target formalism | Template: (r) → (hll, hpl)(hlr, pr) | Example: (head comp) → ([N⟨V⟩N],VB)([N.3sg],NN)
Coarse formalism | Template: (syp) → (syl, hpl)(syr, hpr) | Example: (VP) → (VP,VB)(NP,NN)
Joint features | Template: (r) → (hll, syl)(ler, syr) | Example: (head comp) → ([N⟨V⟩N],VP)([N.3sg],NP)
Table 5: Example features with high weight.
9 Conclusions
We present a method for cross-formalism transfer in parsing. Our model utilizes coarse syntactic annotations to supplement a small number of formalism-specific trees for training on constituency-based grammars. Our experimental results show that across a range of such formalisms, the model significantly benefits from the coarse annotations.
Acknowledgments
The authors acknowledge the support of the Army Research Office (grant 1130128-258552). We thank Yusuke Miyao, Ozlem Cetinoglu, Stephen Clark, Michael Auli and Yue Zhang for answering questions and sharing the code of their work. We also thank the members of the MIT NLP group and the ACL reviewers for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.
References
John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 120–128. Association for Computational Linguistics.
Joan Bresnan. 1982. The mental representation of grammatical relations, volume 1. The MIT Press. Aoife Cahill, Mairad McCarthy, Josef van Genabith, and Andy Way. 2002. Parsing with pcfgs and automatic f-structure annotation. In Proceedings of the Seventh International Conference on LFG, pages 76–95. CSLI Publications. Aoife Cahill, Michael Burke, Ruth O’Donovan, Josef Van Genabith, and Andy Way. 2004. Long-distance dependency resolution in automatically acquired wide-coverage pcfg-based lfg approximations. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 319. Association for Computational Linguistics. Aoife Cahill. 2004. Parsing with Automatically Acquired, Wide-Coverage, Robust, Probabilistic LFG Approximation. Ph.D. thesis. Marie Candito, Benoˆıt Crabb´e, Pascal Denis, et al. 2010. Statistical french dependency parsing: treebank conversion and first results. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010), pages 1840–1847. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 173–180. Association for Computational Linguistics. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 132–139. John Chen and Vijay K Shanker. 2005. Automated extraction of tags from the penn treebank. New developments in parsing technology, pages 73–89. Stephen Clark and James R Curran. 2003. Log-linear models for wide-coverage ccg parsing. In Proceedings of the 2003 conference on Empirical methods in natural language processing, pages 97–104. Association for Computational Linguistics. 299 Stephen Clark and James R Curran. 2007. Widecoverage efficient statistical parsing with ccg and log-linear models. Computational Linguistics, 33(4):493–552. Michael Collins. 1997. Three generative, lexicalised models for statistical pprsing. In Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics, pages 16–23. Association for Computational Linguistics. Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational linguistics, 29(4):589–637. Mark Dredze, John Blitzer, Partha Pratim Talukdar, Kuzman Ganchev, Joao V Grac¸a, and Fernando Pereira. 2007. Frustratingly hard domain adaptation for dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL, volume 2007. William Gropp, Ewing Lusk, and Anthony Skjellum. 1999. Using MPI: portable parallel programming with the message passing interface, volume 1. MIT press. Julia Hockenmaier and Mark Steedman. 2002. Acquiring compact lexicalized grammars from a cleaner treebank. In Proceedings of the Third LREC Conference, pages 1974–1981. Julia Hockenmaier. 2003. Data and models for statistical parsing with combinatory categorial grammar. Rebecca Hwa, Philip Resnik, and Amy Weinberg. 2005. Breaking the resource bottleneck for multilingual parsing. Technical report, DTIC Document. Wenbin Jiang and Qun Liu. 2009. Automatic adaptation of annotation standards for dependency parsing: using projected treebank as source corpus. In Proceedings of the 11th International Conference on Parsing Technologies, pages 25–28. Association for Computational Linguistics. Ronald M. Kaplan, Stefan Riezler, Tracy H. King, John T. 
Maxwell III, Alexander Vasserman, and Richard Crouch. 2004. Speed and accuracy in shallow and deep stochastic parsing. In Proceedings of NAACL. Tracy Holloway King, Richard Crouch, Stefan Riezler, Mary Dalrymple, and Ronald M Kaplan. 2003. The parc 700 dependency bank. In Proceedings of the EACL03: 4th International Workshop on Linguistically Interpreted Corpora (LINC-03), pages 1–8. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 28–36. Association for Computational Linguistics. Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual dependency analysis with a twostage discriminative parser. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 216–220. Association for Computational Linguistics. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 62–72. Association for Computational Linguistics. Yusuke Miyao and Jun’ichi Tsujii. 2008. Feature forest models for probabilistic hpsg parsing. Computational Linguistics, 34(1):35–80. Yusuke Miyao, Takashi Ninomiya, and Junichi Tsujii. 2005. Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the penn treebank. Natural Language Processing–IJCNLP 2004, pages 684–693. Yusuke Miyao. 2006. From Linguistic Theory to Syntactic Analysis: Corpus-Oriented Grammar Development and Feature Forest Model. Ph.D. thesis. Jorge Nocedal and Stephen J Wright. 1999. Numerical optimization. Springer verlag. Stephan Oepen, Dan Flickinger, and Francis Bond. 2004. Towards holistic grammar engineering and testing–grafting treebank maintenance into the grammar revision cycle. In Proceedings of the IJCNLP workshop beyond shallow analysis. Citeseer. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, pages 404–411. Carl Pollard and Ivan A Sag. 1994. Head-driven phrase structure grammar. University of Chicago Press. Stefan Riezler, Tracy H King, Ronald M Kaplan, Richard Crouch, John T Maxwell III, and Mark Johnson. 2002. Parsing the wall street journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 271–278. Association for Computational Linguistics. Benjamin Snyder, Tahira Naseem, and Regina Barzilay. 2009. Unsupervised multilingual grammar induction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 300 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 73–81. Association for Computational Linguistics. Mark Steedman. 2001. The syntactic process. MIT press. Yue Zhang, Stephen Clark, et al. 2011. Shift-reduce ccg parsing. In Proceedings of the 49th Meeting of the Association for Computational Linguistics, pages 683–692. 301
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 22–31, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Training Nondeficient Variants of IBM-3 and IBM-4 for Word Alignment Thomas Schoenemann Heinrich-Heine-Universit¨at D¨usseldorf, Germany Universit¨atsstr. 1 40225 D¨usseldorf, Germany Abstract We derive variants of the fertility based models IBM-3 and IBM-4 that, while maintaining their zero and first order parameters, are nondeficient. Subsequently, we proceed to derive a method to compute a likely alignment and its neighbors as well as give a solution of EM training. The arising M-step energies are non-trivial and handled via projected gradient ascent. Our evaluation on gold alignments shows substantial improvements (in weighted Fmeasure) for the IBM-3. For the IBM4 there are no consistent improvements. Training the nondeficient IBM-5 in the regular way gives surprisingly good results. Using the resulting alignments for phrasebased translation systems offers no clear insights w.r.t. BLEU scores. 1 Introduction While most people think of the translation and word alignment models IBM-3 and IBM-4 as inherently deficient models (i.e. models that assign non-zero probability mass to impossible events), in this paper we derive nondeficient variants maintaining their zero order (IBM-3) and first order (IBM-4) parameters. This is possible as IBM-3 and IBM-4 are very special cases of general loglinear models: they are properly derived by the chain rule of probabilities. Deficiency is only introduced by ignoring a part of the history to be conditioned on in the individual factors of the chain rule factorization. While at first glance this seems necessary to obtain zero and first order deFigure 1: Plot of the negative log. likelihoods (the quantity to be minimized) arising in training deficient and nondeficient models (for Europarl German | English, training scheme 15H53545). 1/3/4=IBM-1/3/4, H=HMM, T=Transfer iteration. The curves are identical up to iteration 11. Iteration 11 shows that merely 5.14% of the (HMM) probability mass are covered by the Viterbi alignment and its neighbors. With deficient models (and deficient empty words) the final negative log likelihood is higher than the initial HMM one, with nondeficient models it is lower than for the HMM, as it should be for a better model. pendencies, we show that with proper renormalization all factors can be made nondeficient. Having introduced the model variants, we proceed to derive a hillclimbing method to compute a likely alignment (ideally the Viterbi alignment) and its neighbors. As for the deficient models, this plays an important role in the E-step of the subsequently derived expectation maximization (EM) training scheme. As usual, expectations in EM are approximated, but we now also get non-trivial Mstep energies. We deal with these via projected gradient ascent. 22 The downside of our method is its resource consumption, but still we present results on corpora with 100.000 sentence pairs. The source code of this project is available in our word alignment software RegAligner1, version 1.2 and later. Figure 1 gives a first demonstration of how much the proposed variants differ from the standard models by visualizing the resulting negative log likelihoods2, the quantity to be minimized in EM-training. The nondeficient IBM-4 derives a lower negative log likelihood than the HMM, the regular deficient variant only a lower one than the IBM-1. 
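The projected gradient ascent mentioned here is, generically, the scheme of taking a gradient step on the M-step energy and then projecting the parameters back onto the probability simplex. The sketch below shows that generic scheme with a standard sorting-based Euclidean projection; it is illustrative only and not the paper's actual M-step solver, whose energies are introduced later.

import numpy as np

def project_to_simplex(v):
    # Euclidean projection of v onto the probability simplex (sorting-based).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0.0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def projected_gradient_ascent(grad, x0, step=0.1, iters=200):
    # Repeatedly step along the gradient and project back onto the simplex.
    x = project_to_simplex(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = project_to_simplex(x + step * grad(x))
    return x

# Toy M-step-like energy sum_j c_j * log(x_j) with expected counts c;
# its maximizer over the simplex is c / c.sum() = [0.6, 0.2, 0.2].
c = np.array([3.0, 1.0, 1.0])
grad = lambda x: c / np.maximum(x, 1e-12)
print(projected_gradient_ascent(grad, np.ones(3) / 3))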
As an aside, the transfer iteration from HMM to IBM3 (iteration 11) reveals that only 5.14% of the probability mass3 are preserved when using the Viterbi alignment and its neighbors instead of all alignments. Indeed, it is widely recognized that – with proper initialization – fertility based models outperform sequence based ones. In particular, sequence based models can simply ignore a part of the sentence to be conditioned on, while fertility based models explicitly factor in a probability of words in this sentence to have no aligned words (or any other number of aligned words, called the fertility). Hence, it is encouraging to see that the nondeficient IBM-4 indeed derives a higher likelihood than the sequence based HMM. Related Work Today’s most widely used models for word alignment are still the models IBM 1-5 of Brown et al. (1993) and the HMM of Vogel et al. (1996), thoroughly evaluated in (Och and Ney, 2003). While it is known that fertilitybased models outperform sequence-based ones, the large bulk of word alignment literature following these publications has mostly ignored fertilitybased models. This is different in the present paper which deals exclusively with such models. One reason for the lack of interest is surely that computing expectations and Viterbi alignments for these models is a hard problem (Udupa and Maji, 2006). Nevertheless, computing Viterbi align1https://github.com/Thomas1205/RegAligner, for the reported results we used a slightly earlier version. 2Note that the figure slightly favors IBM-1 and HMM as for them the length J of the foreign sequence is assumed to be known whereas IBM-3 and IBM-4 explicitly predict it. 3This number regards the corpus probability as in (9) to the power of 1/S, i.e. the objective function in maximum likelihood training. The number is not entirely fair as alignments where more than half the words align to the empty word are assigned a probability of 0. Still, this is an issue only for short sentences. ments for the IBM-3 has been shown to often be practicable (Ravi and Knight, 2010; Schoenemann, 2010). Much work has been spent on HMM-based formulations, focusing on the computationally tractable side (Toutanova et al., 2002; Sumita et al., 2004; Deng and Byrne, 2005). In addition, some rather complex models have been proposed that usually aim to replace the fertility based models (Wang and Waibel, 1998; Fraser and Marcu, 2007a). Another line of models (Melamed, 2000; Marcu and Wong, 2002; Cromi`eres and Kurohashi, 2009) focuses on joint probabilities to get around the garbage collection effect (i.e. that for conditional models, rare words in the given language align to too many words in the predicted language). The downside is that these models are computationally harder to handle. A more recent line of work introduces various forms of regularity terms, often in the form of symmetrization (Liang et al., 2006; Grac¸a et al., 2010; Bansal et al., 2011) and recently by using L0 norms (Vaswani et al., 2012). 2 The models IBM-3, IBM-4 and IBM-5 We begin with a short review of fertility-based models in general and IBM-3, IBM-4 and IBM5 specifically. All are due to (Brown et al., 1993) who proposed to use the deficient models IBM-3 and IBM-4 to initialize the nondeficient IBM-5. For a foreign sentence f = fJ 1 = (f1, . . . , fJ) with J words and an English one e = eI 1 = (e1, . . . 
, eI) with I words, the (conditional) probability p(fJ 1 |eI 1) of getting the foreign sentence as a translation of the English one is modeled by introducing the word alignment a as a hidden variable: p(fJ 1 |eI 1) = X a p(fJ 1 , a|eI 1) All IBM models restrict the space of alignments to those where a foreign word can align to at most one target word. The resulting alignment is then written as a vector aJ 1 , where each aj takes integral values between 0 and I, with 0 indicating that fj has no English correspondence. The fertility-based models IBM-3, IBM-4 and IBM-5 factor the (conditional) probability p(fJ 1 , aJ 1 |eI 1) of obtaining an alignment and a translation given an English sentence according to the following generative story: 23 1. For i = 1, 2, . . . , I, decide on the number Φi of foreign words aligned to ei. This number is called the fertility of ei. Choose with probability p(Φi|eI 1, Φi−1 1 ) = p(Φi|ei). 2. Choose the number Φ0 of unaligned words in the (still unknown) foreign sequence. Choose with probability p(Φ0|eI 1, ΦI 1) = p(Φ0| PI i=1 Φi). Since each foreign word belongs to exactly one English position (including 0), the foreign sequence is now known to be of length J = PI i=0 Φi. 3. For each i = 1, 2, . . . , I, and k = 1, . . . , Φi decide on (a) the identity fi,k of the next foreign word aligned to ei. Choose with probability p(fi,k|eI 1, ΦI 0, di−1 1 , di,1, . . . , di,k−1, fi,k) = p(fi,k|ei), where di comprises all di,k for word i (see point b) below) and fi,k comprises all foreign words known at that point. (b) the position di,k of the just generated foreign word fi,k, with probability p(di,k|eI 1, ΦI 0, di−1 1 , di,1, . . . , di,k−1, fi,k, fi,k) = p(di,k|ei, di−1 1 , di,1, . . . , di,k−1, fi,k, J). 4. The remaining Φ0 open positions in the foreign sequence align to position 0. Decide on the corresponding foreign words with p(fd0,k|e0), where e0 is an artificial “empty word”. To model the probability for the number of unaligned words in step 2, each of the PI i=1 Φi properly aligned foreign words generates an unaligned foreign word with probability p0, resulting in p  Φ0 I X i=1 Φi  =    IP i=1 Φi Φ0   pΦi 0 (1−p0)(P i Φi)−Φ0, with a base probability p0 and the combinatorial coefficients  n k  = n! k!(n−k)!, where n! = Qn k=1 k denotes the factorial of n. The main difference between IBM-3, IBM-4 and IBM-5 is the choice of probability model in step 3 b), called a distortion model. The choices are now detailed. 2.1 IBM-3 The IBM-3 implements a zero order distortion model, resulting in p(di,k|i, J) . Since most of the context to be conditioned on is ignored, this allows invalid configurations to occur with non-zero probability: some foreign positions can be chosen several times, while others remain empty. One says that the model is deficient. On the other hand, the model for p(Φ0| PI i=1 Φi) is nondeficient, and in training this often results in very high probabilities p0. To prevent this it is common to make this model deficient as well (Och and Ney, 2003), which improves performance immensely and gives much better results than simply fixing p0 in the original model. As for each i the di,k can appear in any order (i.e. need not be in ascending order), there are QI i=1 Φi! ways to generate the same alignment aJ 1 (where the Φi are the fertilities induced by aJ 1 ). In total, the IBM-3 has the following probability model: p(fJ 1 , aJ 1 |eI 1) = J Y j=1 h p(fj|eaj) · p(j|aj, J) i (1) · p  Φ0| I X i=1 Φi  · IY i=1 Φi! p(Φi|ei) . 
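To make the generative story concrete, the sketch below evaluates the deficient IBM-3 probability of equation (1) for a given alignment. It is only an illustration, not the implementation used in the paper: the table layout (trans_prob, fert_prob, distort_prob), the encoding of the empty word as None, and the handling of empty-word positions (which contribute only a translation factor here) are assumptions.

```python
from math import comb, factorial

def ibm3_prob(f, e, a, trans_prob, fert_prob, distort_prob, p0):
    """Deficient IBM-3 probability p(f_1^J, a_1^J | e_1^I) as in equation (1).

    f, e : foreign and English words; a[j-1] in 0..I aligns f_j (0 = empty word)
    trans_prob[(fw, ew)]    : p(f | e), with ew = None for the empty word e_0
    fert_prob[(phi, ew)]    : p(Phi | e)
    distort_prob[(j, i, J)] : zero-order distortion p(j | i, J)
    p0 : base probability of the deficient empty-word model
    """
    I, J = len(e), len(f)
    phi = [0] * (I + 1)                      # fertilities induced by a
    for i in a:
        phi[i] += 1
    aligned = sum(phi[1:])

    # deficient empty-word model: each properly aligned word spawns an
    # unaligned word with probability p0 (binomial over 'aligned' trials)
    prob = comb(aligned, phi[0]) * p0 ** phi[0] * (1 - p0) ** (aligned - phi[0])

    # fertility terms and the Phi_i! orderings of each cept
    for i in range(1, I + 1):
        prob *= factorial(phi[i]) * fert_prob[(phi[i], e[i - 1])]

    # translation and distortion factors, one per foreign position
    for j in range(1, J + 1):
        i = a[j - 1]
        prob *= trans_prob[(f[j - 1], e[i - 1] if i > 0 else None)]
        if i > 0:
            prob *= distort_prob[(j, i, J)]
    return prob
```

The hillclimbing and EM procedures described below only ever query this joint probability for concrete alignments.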
Reducing the Number of Parameters While using non-parametric models p(j|i, J) is convenient for closed-form M-steps in EM training, these parameters are not very intuitive. Instead, in this paper we use the parametric model p(j|i, J) = p(j|i) PJ j=1 p(j|i) (2) with the more intuitive parameters p(j|i). The arising M-step energy is addressed by projected gradient ascent (see below). These parameters are also used for the nondeficient variants. Using the original non-parametric ones can be handled in a very similar manner to the methods set forth below. 2.2 IBM-4 The distortion model of the IBM-4 is a first order one that generates the di,k of each English position i in ascending order (i.e. for 1 < k ≤Φi we have di,k > di,k−1). There is then a one-to-one correspondence between alignments aJ 1 and (valid) distortion parameters (di,k)i=1,...,I, k=1,...,Φi and therefore no longer a factor of QI i=1 Φi! . The IBM-4 has two sub-distortion models, one for the first aligned word (k = 1) of an English position and one for all following words (k > 1, only 24 if Φi > 1). For position i, let [i]=arg max{i′|1≤ i′ < i, Φi′ > 0} denote4 the closest preceding English word that has aligned foreign words. The aligned foreign positions of [i] are combined into a center position ⊙[i], the rounded average of the positions. Now, the distortion probability for the first word (k = 1) is p=1(di,1|⊙[i], A(fi,1), B(e[i]), J) , where A gives the word class of a foreign word and B the word class of an English word (there are typically 50 classes per language, derived by machine learning techniques). The probability is further reduced to a dependency on the difference of the positions, i.e. p=1(di,1−⊙[i] | A(fi,1), B(e[i])). For k > 1 the model is p>1(di,k|di,k−1, A(fi,k), J) , which is likewise reduced to p>1(di,k − di,k−1 | A(fi,k)). Note that in both differencebased formulations the dependence on J has to be dropped to get closed-form solutions of the M-step in EM training, and Brown et al. note themselves that the IBM-4 can place words before the start and after the end of the sentence. Reducing Deficiency In this paper, we also investigate the effect of reducing the amount of wasted probability mass by enforcing the dependence on J by proper renormalization, i.e. using p=1(j|j′, A(fi,1), B(e[i]), J) = (3) p=1(j −j′|A(fi,1), B(e[i])) PJ j′′=1 p=1(j′′ −j′|A(fi,1), B(e[i])) , for the first aligned word and p>1(j|j′, A(fi,k), J) = (4) p>1(j −j′ | A(fi,k)) PJ j′′=1 p>1(j′′ −j′ | A(fi,k)) for all following words, again handling the M-step in EM training via projected gradient ascent. With this strategy words can no longer be placed outside the sentence, but a lot of probability mass is still wasted on configurations where at least one foreign (or predicted) position j aligns to two or more positions i, i′ in the English (or given) language (and consequently there are more unaligned 4If the set is empty, instead a sentence start probability is used. Note that we differ slightly in notation compared to (Brown et al., 1993). source words than the generated Φ0). Therefore, here, too, the probability for Φ0 has to be made deficient to get good performance. In summary, the base model for the IBM-4 is: p(fJ 1 , aJ 1 |eI 1) = p  Φ0| I X i=1 Φi  (5) · J Y j=1 p(fj|eaj) · IY i=1 p(Φi|ei) · Y i:Φi>0 h p=1(di,1 −⊙[i]|A(fi,1), B(e[i])) · Φi Y k=2 p>1(di,k −di,k−1|A(fi,k)) i , where empty products are understood to be 1. 
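The renormalization in (3) and (4) only changes the denominator of the difference-based model. A minimal sketch, assuming the relevant class-conditioned parameters have already been looked up into a dictionary keyed by the position difference; j_prev stands for ⊙[i] in the k = 1 case of (3) and for di,k−1 in the k > 1 case of (4).

```python
def renormalized_distortion(j, j_prev, diff_prob, J):
    """Reduced-deficiency IBM-4 distortion of equations (3) and (4).

    diff_prob[d] : the difference-based parameter p(d | word classes), already
                   looked up for the relevant class pair A(f), B(e)
    Renormalizing over the J foreign positions keeps all mass inside the
    sentence, i.e. words can no longer be placed before its start or past
    its end.
    """
    denom = sum(diff_prob.get(jj - j_prev, 0.0) for jj in range(1, J + 1))
    return diff_prob.get(j - j_prev, 0.0) / denom if denom > 0.0 else 0.0
```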
2.3 IBM-5 We note in passing that the distortion model of the IBM-5 is nondeficient and has parameters for filling the nth open gap in the foreign sequence given that there are N positions to choose from – see the next section for exactly what positions one can choose from. There is also a dependence on word classes for the foreign language. This is neither a zero order nor a first order dependence, and in (Och and Ney, 2003) the first order model of the IBM-4, though deficient, outperformed the IBM-5. The IBM-5 is therefore rarely used in practice. This motivated us to instead reformulate IBM-3 and IBM-4 as nondeficient models. In our results, however, the IBM-5 gave surprisingly good results and was often superior to all variants of the IBM-4. 3 Nondeficient Variants of IBM-3 and IBM-4 From now on we always enforce that for each position i the indices di,k are generated in ascending order (di,k > di,k−1 for k > 1). A central concept for the generation of di,k in step 3(b) is the set of positions in the foreign sequence that are still without alignment. We denote the set of these positions by Ji,k,J = {1, . . . , J} −{di,k′ | 1 ≤k′ < k} −{di′,k′ | 1 ≤i′ < i, 1 ≤k′ ≤Φi′} where the dependence on the various di′,k′ is not made explicit in the following. It is tempting to think that in a nondeficient model all members of Ji,k,J can be chosen for 25 di,k, but this holds only Φi = 1. Otherwise, the requirement of generating the di,k in ascending order prevents us from choosing the (Φi−k) largest entries in Ji,k,J. For k > 1 we also have to remove all positions smaller than di,k−1. Let J Φi i,k,J denote the set where these positions have been removed. With that, we can state the nondeficient variants of IBM-3 and IBM-4. 3.1 Nondeficient IBM-3 For the IBM-3, we define the auxiliary quantity q(di,k = j | i, J Φi i,k,J) = p(j|i) if j ∈J Φi i,k,J 0 else , where we use the zero order parameters p(j|i) we also use for the standard (deficient) IBM-3, compare (2). To get a nondeficient variant, it remains to renormalize, resulting in p(di,k = j|i, J Φi i,k,J) = q(j|i, J Φi i,k,J) PJ j=1 q(j|i, J Φi i,k,J) . (6) Further, note that the factors Φi! now have to be removed from (1) as the di,k are generated in ascending order. Lastly, here we use the original nondeficient empty word model p(Φ0| PI i=1 Φi), resulting in a totally nondeficient model. 3.2 Nondeficient IBM-4 With the notation set up, it is rather straightforward to derive a nondeficient variant of the IBM4. Here, there are the two cases k = 1 and k > 1. We begin with the case k = 1. Abbreviating α = A(fi,1) and β = B(e[i]), we define the auxiliary quantity q=1(di,1 = j|⊙[i], α, β, J Φi i,k,J) = (7) p=1(j −⊙[i]|α, β) if j ∈J Φi i,k,J 0 else , again using the - now first order - parameters of the base model. The nondeficient distribution p=1(di,1 = j|⊙[i], α, β, J Φi i,k,J) is again obtained by renormalization. For the case k > 1, we abbreviate α = A(fi,k) and introduce the auxiliary quantity q>1(di,k = j|di,k−1, α, J Φi i,k,J) = (8) p>1(j −di,k−1|α) if j ∈J Φi i,k,J 0 else , from which the nondeficient distribution p>1(di,k=j|di,k−1, α, J Φi i,k,J) is again obtained by renormalization. 4 Training the New Variants For the task of word alignment, we infer the parameters of the models using the maximum likelihood criterion max θ S Y s=1 pθ(fs|es) (9) on a set of training data (i.e. sentence pairs s = 1, . . . , S). Here, θ comprises all base parameters of the respective model (e.g. 
for the IBM-3 all p(f|e), all p(Φ, e) and all p(j|i) ) and pθ signifies the dependence of the model on the parameters. Note that (9) is truly a constrained optimization problem as the parameters θ have to satisfy a number of probability normalization constraints. When pθ(·) denotes a fertility based model the resulting problem is a non-concave maximization problem with many local minima and no (known) closed-form solutions. Hence, it is handled by computational methods, which typically apply the logarithm to the above function. Our method of choice to attack the maximum likelihood problem is expectation maximization (EM), the standard in the field, which we explain below. Due to non-concaveness the starting point for EM is of extreme importance. As is common, we first train an IBM-1 and then an HMM before proceeding to the IBM-3 and finally the IBM-4. As in the training of the deficient IBM-3 and IBM-4 models, we approximate the expectations in the E-step by a set of likely alignments, ideally centered around the Viterbi alignment, but already for the regular deficient variants computing it is NP-hard (Udupa and Maji, 2006). A first task is therefore to compute such a set. This task is also needed for the actual task of word alignment (annotating a given sentence pair with an alignment). 4.1 Alignment Computation For computing alignments, we use the common procedure of hillclimbing where we start with an alignment, then iteratively compute the probabilities of all alignments differing by a move or a swap (Brown et al., 1993) and move to the best of these if it beats the current alignment. Since we cannot ignore parts of the history and still get a nondeficient model, computing the probabilities of the neighbors cannot be handled incrementally (or rather only partially, for the dictionary and fertility models). While this does increase running times, in practice the M-steps take longer than the E-steps. 26 For self-containment, we recall here that for an alignment aJ 1 applying the move aJ 1 [j →i] results in the alignment ˆaJ 1 defined by ˆaj =i and ˆaj′ =aj′ for j′ ̸=j. Applying the swap aJ 1 [j1 ↔j2] results in the alignment ˆaJ 1 defined by ˆaj1 =aj2, ˆaj2 =aj1 and ˆaj′ = aj′ elsewhere. If aJ 1 is the alignment produced by hillclimbing, the move matrix m ∈ IRJ×I+1 is defined by mj,i being the probability of aJ 1 [j →i] as long as aj ̸= i, otherwise 0. Likewise the swap matrix s ∈IRJ×J is defined as sj1,j2 being the probability of aJ 1 [j1 ↔j2] for aj1 ̸=aj2, 0 otherwise. The move and swap matrices are used to approximate expectations in EM training (see below). 4.2 Parameter Update Naive Scheme It is tempting to account for the changes in the model in hillclimbing, but to otherwise use the regular M-step procedures (closed form solution when not conditioning on J for the IBM-4 and for the non-parametric IBM-3, otherwise projected gradient ascent) for the deficient models. However, we verified that this is not a good idea: not only can the likelihood go down in the process (even if we could compute expectations exactly), but these schemes also heavily increase p0 in each iteration, i.e. the same problem Och and Ney (2003) found for the deficient models. There is therefore the need to execute the Mstep properly, and when done the problem is indeed resolved. 
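Before turning to the proper M-step, here is a sketch of the move/swap hillclimbing of Section 4.1. The score callable, which evaluates p(f, a|e) under whichever model variant is being trained, is an assumption of the sketch; as noted above, for the nondeficient variants it is evaluated largely from scratch rather than incrementally.

```python
def hillclimb(a_init, I, score):
    """Move/swap hillclimbing towards a likely alignment (Section 4.1).

    a_init : initial alignment, a list of length J with entries in 0..I
    score  : callable giving the model probability p(f, a | e) of an alignment
    Returns the final alignment, its score, and the move and swap matrices
    computed around it, which the E-step uses to approximate expectations.
    """
    J = len(a_init)
    best, best_score = list(a_init), score(a_init)
    while True:
        move = [[0.0] * (I + 1) for _ in range(J)]   # m[j][i] = p of a[j -> i]
        swap = [[0.0] * J for _ in range(J)]         # s[j1][j2] = p of a[j1 <-> j2]
        cand, cand_score = None, best_score
        for j in range(J):
            for i in range(I + 1):
                if i == best[j]:
                    continue
                b = list(best); b[j] = i             # move a[j -> i]
                move[j][i] = s = score(b)
                if s > cand_score:
                    cand, cand_score = b, s
        for j1 in range(J):
            for j2 in range(j1 + 1, J):              # swaps are symmetric
                if best[j1] == best[j2]:
                    continue
                b = list(best); b[j1], b[j2] = b[j2], b[j1]
                swap[j1][j2] = s = score(b)
                if s > cand_score:
                    cand, cand_score = b, s
        if cand is None:                             # no neighbor beats the current alignment
            return best, best_score, move, swap
        best, best_score = cand, cand_score
```

The returned move and swap matrices are exactly the objects used below to approximate the E-step expectations.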
Proper EM The expectation maximization (EM) framework (Dempster et al., 1977; Neal and Hinton, 1998) is a class of template procedures (rather than a proper algorithm) that iteratively requires solving the task max θk S X s=1 X as pθk−1(as|fs, es) log pθk(fs, as|es)  (10) by appropriate means. Here, θk−1 are the parameters from the previous iteration, while θk are those derived in the current iteration. Of course, here and in the following the normalization constraints on θ apply, as already in (9). On explicit request of a reviewer we give a detailed account for our setting here. Readers not interested in the details can safely move on to the next section. Details on EM For the corpora occurring in practice, the function (10) has many more terms than there are atoms in the universe. The trick is that pθk(fs, as|es) is a product of factors, where each factor depends on very few components of θk only. Taking the logarithm gives a sum of logarithms, and in the end we are left with the problem of computing the weights of each factor, which turn out to be expectations. To apply this to the (deficient) IBM-3 model with parametric distortion we simplify pθk−1(as|fs, es) = p(as) and define the counts nf,e(as) = PJs j=1 δ(fs j , f) · δ(es as j, e), nΦ,e(as) = PIs i=1 δ(es i, e)·δ(Φi(as), Φ) and nj,i(as) = δ(as j, i). We also use short hand notations for sets, e.g. {p(f|e)} is meant as the set of all translation probabilities induced by the given corpus. With this notation, after reordering the terms problem (10) can be written as max {p(f|e)},{p(Φ|e)},{p(j|i)} (11) X e,f h S X s=1 X as p(as) nf,e(as) i log p(f|e)  + X e,Φ h S X s=1 X as p(as) nΦ,e(as) i log p(Φ, e)  + X i,j h S X s=1 X as p(as) nj,i(as) i log p(j|i, J)  . Indeed, the weights in each line turn out to be nothing else than expectations of the respective factor under the distribution pθk−1(as|fs, es) and will henceforth be written as wf,e, wΦ,e and wj,i,J. Therefore, executing an iteration of EM requires first calculating all expectations (E-step) and then solving the maximization problems (M-step). For models such as IBM-1 and HMM the expectations can be calculated efficiently, so the enormous sum of terms in (10) is equivalently written as a manageable one. In this case it can be shown5 that the new θk must have a higher likelihood (9) than θk−1 (unless a stationary point is reached). In fact, any θ that has a higher value in the auxiliary function (11) than θk−1 must also have a higher likelihood. This is an important background for parametric models such as (2) where the M-step cannot be solved exactly. For IBM-3/4/5 computing exact expectations is intractable (Udupa and Maji, 2006) and approximations have to be used (in fact, even computing the likelihood for a given θ is intractable). We 5See e.g. the author’s course notes (in German), currently http://user.phil-fak.uni-duesseldorf.de/ ˜tosch/downloads/statmt/wordalign.pdf. 27 use the common procedure based on hillclimbing and the move/swap matrices. The likelihood is not guaranteed to increase but it (or rather its approximation) always did in each of the five run iterations. Nevertheless, the main advantage of EM is preserved: problem (11) decomposes into several smaller problems, one for each probability distribution since the parameters are tied by the normalization constraints. The result is one problem for each e involving all p(f|e), one for each e involving all p(Φ|e) and one for each i involving all p(j|i). 
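Concretely, these weights can be accumulated from the hillclimbing alignment and its move/swap neighbors. The sketch below is one way to do this for a single sentence pair; the accumulator dictionaries (e.g. collections.defaultdict(float)), the None encoding of the empty word, and the neighbor representation are assumptions, and the cutoff mirrors the 10−6 posterior threshold applied below.

```python
def accumulate_counts(neighbors, f, e, w_trans, w_fert, w_dist, cutoff=1e-6):
    """Approximate E-step for one sentence pair: turn the hillclimbing
    alignment and its move/swap neighbors into the weights of (11).

    neighbors : list of (alignment, model probability) pairs
    w_trans, w_fert, w_dist : dict-like accumulators (defaulting to 0.0) for
        the expectations w_{f,e}, w_{Phi,e} and w_{j,i,J}
    """
    Z = sum(p for _, p in neighbors)
    if Z <= 0.0:
        return
    I, J = len(e), len(f)
    for a, p in neighbors:
        post = p / Z                            # approximate p(a | f, e)
        if post <= cutoff:
            continue
        phi = [0] * (I + 1)
        for j, i in enumerate(a, start=1):
            phi[i] += 1
            w_trans[(f[j - 1], e[i - 1] if i > 0 else None)] += post
            if i > 0:
                w_dist[(j, i, J)] += post
        for i in range(1, I + 1):
            w_fert[(phi[i], e[i - 1])] += post
```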
The problems for the translation probabilities and the fertility probabilities yield the known standard update rules. The most interesting case is the problem for the (parametric) distortion models. In the deficient setting, the problem for each i is max {p(j|i)} X J wi,j,J log p(j|i) PJ j′=1 p(j′|i) ! In the nondeficient setting, we now drop the subscripts i, k, J and the superscript Φ from the sets defined in the previous sections, i.e. we write J instead of J Φ i,k,J. The M-step problem is then max {p(j|i)} Ei = X j X J :j∈J wj,i,J log p(j|i, J )  , where wj,i,J (with j ∈J ) is the expectation for aligning j to i when one can choose among the positions in J , and with p(j|i, J ) as in (6). In principle there is an exponential number of expectations wj,i,J . However, since we approximate expectations from the move and swap matrices, and hence by O((I + J) · J) alignments per sentence pair, in the end we get a polynomial number of terms. Currently we only consider alignments with (approximated) pθk−1(as|fs, es) > 10−6. Importantly, the fact that we get separate M-step problems for different i allows us to reduce memory consumption by using refined data structures when storing the expectations. For both the deficient and the nondeficient variants, the M-step problems for the distortion parameters p(j|i) are non-trivial, non-concave and have no (known) closed form solutions. We approach them via the method of projected gradient ascent (PGA), where the gradient for the nondeficient problem is ∂Ei ∂p(j|i) = X J :j∈J " wj,J p(j|i) − P j′∈J wj′,J P j′∈J p(j′|i) # . When running PGA we guarantee that the resulting {p(j|i)} has a higher function value Ei than the input ones (unless a stationary point is input). We stop when a cutoff criterion indicates a local maximum or 250 iterations are used up. Projected Gradient Ascent This method is used in a couple of recent papers, notably (Schoenemann, 2011; Vaswani et al., 2012) and is briefly sketched here for self-containment (see those papers for more details). To solve a maximization problem max p(j|i)≥0,P j p(j|i)=1 Ei({p(j|i)}) for some (differentiable) function Ei(·), one iteratively starts at the current point {pk(j|i)}, computes the gradient ∇Ei({pk(j|i)}) and goes to the point q(j|i) = pk(j|i) + α∇Ei(pk(j|i)) , j = 1, . . . , J for some step-length α. This point is generally not a probability distribution, so one computes the nearest probability distribution min q′(j|i)≥0,P j q′(j|i)=1 J X j=1 q′(j|i) −q(j|i) 2 , a step known as projection which we solve with the method of (Michelot, 1986). The new distribution {q′(j|i)} is not guaranteed to have a higher Ei(·), but (since the constraint set is a convex one) a suitable interpolation of {pk(j|i)} and {q′(j|i)} is guaranteed to have a higher value (unless {pk(j|i)} is a local maximum or minimum of Ei(·)). Such a point is computed by backtracking line search and defines the next iterate {pk+1(j|i)}. IBM-4 When moving from the IBM-3 to the IBM-4, only the last line in (11) changes. In the end one gets two new kinds of problems, for p=1(·) and p>1(·). 
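Before stating these two problems explicitly, here is a sketch of the IBM-3 case just described: the nondeficient M-step energy Ei, its gradient as given above, and projected gradient ascent with backtracking line search. The data layout (one (positions, weights) pair per admissible set J), the fixed gradient step length, and the tolerances are assumptions; the sort-based routine computes the same Euclidean projection as the Michelot (1986) method referenced above, and p should be initialized strictly positive (e.g. uniform).

```python
import numpy as np

def project_to_simplex(q):
    """Euclidean projection onto {p >= 0, sum(p) = 1}."""
    u = np.sort(q)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(q) + 1) > css - 1.0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(q - tau, 0.0)

def mstep_energy_and_grad(p, observations):
    """Energy E_i = sum over admissible sets J and j in J of w_{j,i,J} log p(j|i,J),
    with p(j|i,J) the renormalization of p(j|i) over J as in (6), plus its gradient.

    p            : array of parameters p(j|i), indexed by foreign position j-1
    observations : list of (positions, weights) pairs, one per admissible set
                   seen in the E-step; weights[j] is the expectation w_{j,i,J}
    """
    energy, grad = 0.0, np.zeros_like(p)
    for positions, weights in observations:
        idx = [j - 1 for j in positions]
        denom = p[idx].sum()
        if denom <= 0.0:
            return float("-inf"), grad
        wsum = sum(weights.values())
        for j, w in weights.items():
            if p[j - 1] <= 0.0:
                return float("-inf"), grad
            energy += w * (np.log(p[j - 1]) - np.log(denom))
            grad[j - 1] += w / p[j - 1]          # first term of the gradient
        grad[idx] -= wsum / denom                # second term, for all j in J
    return energy, grad

def projected_gradient_ascent(p, observations, step=1.0, iters=250):
    """PGA: gradient step, projection onto the simplex, then backtracking
    along the segment between the old point and the projection, so that
    every accepted iterate increases E_i."""
    for _ in range(iters):
        e0, grad = mstep_energy_and_grad(p, observations)
        q = project_to_simplex(p + step * grad)
        alpha, improved = 1.0, False
        while alpha > 1e-12:
            cand = (1.0 - alpha) * p + alpha * q   # stays inside the simplex
            if mstep_energy_and_grad(cand, observations)[0] > e0:
                p, improved = cand, True
                break
            alpha *= 0.5                           # backtracking line search
        if not improved:                           # (near) local maximum
            break
    return p
```

The IBM-4 problems stated next have the same structure; as noted in the text, their gradients are computed analogously.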
For p=1(·) we have one problem for each foreign class α and each English class β, of the form max {p=1(j|j′,α,β)} X j,j′,J wj,j′,J,α,β log p=1(j|j′, α, β, J)  for reduced deficiency (with p=1(j|j′, α, β, J) as in (3) ) and of the form max {p=1(j|j′,α,β)} X j,j′,J wj,j′,J ,α,β log p=1(j|j′, α, β, J )  28 Model Degree of Deficiency De|En En|De Es|En En|Es HMM nondeficient (our) 73.8 77.6 77.4 76.1 IBM-3 full (GIZA++) 74.2 76.5 74.3 74.5 IBM-3 full (our) 75.6 79.2 75.2 73.7 IBM-3 nondeficient (our) 76.1 79.8 76.8 75.5 IBM-4, 1 x 1 word class full (GIZA++) 77.9 79.4 78.6 78.4 IBM-4, 1 x 1 word class full (our) 76.1 81.5 77.8 78.0 IBM-4, 1 x 1 word class reduced (our) 77.2 80.6 77.9 78.3 IBM-4, 1 x 1 word class nondeficient (our) 77.6 81.5 80.0 78.4 IBM-4, 50 x 50 word classes full (GIZA++) 78.6 80.4 79.3 79.3 IBM-4, 50 x 50 word classes full (our) 78.0 82.4 79.2 79.4 IBM-4, 50 x 50 word classes reduced (our) 78.5 82.1 79.2 79.0 IBM-4, 50 x 50 word classes nondeficient (our) 77.9 82.5 79.7 78.2 IBM-5, 50 word classes nondeficient (GIZA++) 79.4 81.1 80.0 79.5 IBM-5, 50 word classes nondeficient (our) 79.2 82.7 79.7 79.5 Table 1: Alignment accuracy (weighted F-measure times 100, α = 0.1) on Europarl with 100.000 sentence pairs. Reduced deficiency means renormalization as in (3) and (4), so that words cannot be placed before or after the sentence. For the IBM-3, the nondeficient variant is clearly best. For the IBM-4 it is better in roughly half the cases, both with and without word classes. for the nondeficient variant, with p=1(j|j′, α, β, J ) based on (7). For p>1(·) we have one problem per foreign class α, of the form max {p>1(j|j′,α)} X j,j′,J wj,j′,J,α log p>1(j|j′, α, J)  for reduced deficiency, with p>1(j|j′, α, J) based on (4), and for the nondeficient variant it has the form max {p>1(j|j′,α)} X j,j′,J wj,j′,J ,α log p>1(j|j′, α, J )  , with p>1(j|j′, α, J ) based on (8). Calculating the gradients is analogous to the IBM-3. 5 Experiments We test the proposed methods on subsets of the Europarl corpus for German and English as well as Spanish and English, using lower-cased corpora. We evaluate alignment accuracies on gold alignments6 in the form of weighted F-measures with α=0.1, which performed well in (Fraser and Marcu, 2007b). In addition we evaluate the effect on phrase-based translation on one of the tasks. We implement the proposed methods in our own framework RegAligner rather than GIZA++, 6from (Lambert et al., 2005) and from http://user.phil-fak.uni-duesseldorf.de/ ˜tosch/downloads.html. which is only rudimentally maintained. Therefore, we compare to the deficient models in our own software as well as to those in GIZA++. We run 5 iterations of IBM-1, followed by 5 iterations of HMM, 5 of IBM-3 and finally 5 of IBM-4. The first iteration of the IBM-3 collects counts from the HMM, and likewise the first iteration of the IBM-4 collects counts from the IBM3 (in both cases the move and swap matrices are filled with probabilities of the former model, then theses matrices are used as in a regular model iteration). A nondeficient IBM-4 is always initialized by a nondeficient IBM-3. We did not set a fertility limit (except for GIZA++). Experiments were run on a Core i5 with 2.5 GHz and 8 GB of memory. The latter was the main reason why we did not use still larger corpora7. The running times for the entire training were half a day without word classes and a day with word classes. 
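For reference, the evaluation metric reported in Table 1 can be computed as in the sketch below. It assumes a single set of gold links and follows the α-weighted F-measure of Fraser and Marcu (2007b) in spirit; any sure/possible distinction in the gold standard is ignored here.

```python
def weighted_f_measure(hypothesis, gold, alpha=0.1):
    """Alpha-weighted F-measure between predicted and gold alignment links.

    hypothesis, gold : sets of (j, i) link pairs
    F = 1 / (alpha / precision + (1 - alpha) / recall); with alpha = 0.1 the
    harmonic combination is dominated by recall.
    """
    matched = len(hypothesis & gold)
    if matched == 0:
        return 0.0
    precision = matched / len(hypothesis)
    recall = matched / len(gold)
    return 1.0 / (alpha / precision + (1.0 - alpha) / recall)
```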
With 50 instead of 250 PGA iterations in all M-steps we get only half these running times, but the resulting F-measures deteriorate, especially for the IBM-4 with classes. The running times of our implementation of the IBM-5 are much more favorable: the entire training then runs in little more than an hour. 7The main memory bottleneck is the IBM-4 (6 GB without classes, 8 GB with). Using refined data structures should reduce this bottleneck. 29 5.1 Alignment Accuracy The alignment accuracies – weighted F-measures with α = 0.1 – for the tested corpora and model variants are given in Table 1. Clearly, nondeficiency greatly improves the accuracy of the IBM3, both compared to our deficient implementation and that of GIZA++. For the IBM-4 we get improvements for the nondeficient variant in roughly half the cases, both with and without word classes. We think this is an issue of local minima, inexactly solved M-steps and sensitiveness to initialization. Interestingly, also the reduced deficient IBM-4 is not always better than the fully deficient variant. Again, we think this is due to problems with the non-concave nature of the models. There is also quite some surprise regarding the IBM-5: contrary to the findings of (Och and Ney, 2003) the IBM-5 in GIZA++ performs best in three out of four cases - when competing with both deficient and nondeficient variants of IBM-3 and IBM-4. Our own implementation gives slightly different results (as we do not use smoothing), but it, too, performs very well. 5.2 Effect on Translation Performance We also check the effect of the various alignments (all produced by RegAligner) on translation performance for phrase-based translation, randomly choosing translation from German to English. We use MOSES with a 5-gram language model (trained on 500.000 sentence pairs) and the standard setup in the MOSES Experiment Management System: training is run in both directions, the alignments are combined using diag-grow-final-and (Och and Ney, 2003) and the parameters of MOSES are optimized on 750 development sentences. The resulting BLEU-scores are shown in Table 2. However, the table shows no clear trends and even the IBM-3 is not clearly inferior to the IBM4. We think that one would need to handle larger corpora (or run multiple instances of Minimum Error Rate Training with different random seeds) to get more meaningful insights. Hence, at present our paper is primarily of theoretical value. 6 Conclusion We have shown that the word alignment models IBM-3 and IBM-4 can be turned into nondeficient Model #Classes Deficiency BLEU HMM nondeficient 29.72 IBM-3 deficient 29.63 IBM-3 nondeficient 29.73 IBM-4 1 x 1 fully deficient 29.91 IBM-4 1 x 1 reduced deficient 29.88 IBM-4 1 x 1 nondeficient 30.18 IBM-4 50 x 50 fully deficient 29.86 IBM-4 50 x 50 reduced deficient 30.14 IBM-4 50 x 50 nondeficient 29.90 IBM-5 50 nondeficient 29.84 Table 2: Evaluation of phrase-based translation from German to English with the obtained alignments (for 100.000 sentence pairs). Training is run in both directions and the resulting alignments are combined via diag-grow-final-and. The table shows no clear superiority of any method. In fact, the IBM-4 is not superior to the IBM-3 and the HMM is about equal to the IBM-3. We think that one needs to handle larger corpora to get clearer insights. variants, an important aim of probabilistic modeling for word alignment. 
Here we have exploited that the models are proper applications of the chain rule of probabilities, where deficiency is only introduced by ignoring parts of the history for the distortion factors in the factorization. By proper renormalization the desired nondeficient variants are obtained. The arising models are trained via expectation maximization. In the E-step we use hillclimbing to get a likely alignment (ideally the Viterbi alignment). While this cannot be handled fully incrementally, it is still fast enough in practice. The M-step energies are non-concave and have no (known) closed-form solutions. They are handled via projected gradient ascent. For the IBM-3 nondeficiency clearly improves alignment accuracy. For the IBM-4 we get improved accuracies in roughly half the cases, both with and without word classes. The IBM-5 performs surprisingly well, it is often best and hence much better than its reputation. An evaluation of phrase based translation showed no clear insights. Nevertheless, we think that nondeficiency in fertility based models is an important issue, and that at the very least our paper is of theoretical value. The implementations are publicly available in RegAligner 1.2. 30 References M. Bansal, C. Quirk, and R. Moore. 2011. Gappy phrasal alignment by agreement. In Annual Meeting of the Association for Computational Linguistics (ACL), Portland, Oregon, June. P.F. Brown, S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311, June. F. Cromi`eres and S. Kurohashi. 2009. An alignment algorithm using Belief Propagation and a structurebased distortion model. In Conference of the European Chapter of the Association for Computational Linguistics (EACL), Athens, Greece, April. A.P. Dempster, N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Y. Deng and W. Byrne. 2005. HMM word and phrase alignment for statistical machine translation. In HLT-EMNLP, Vancouver, Canada, October. A. Fraser and D. Marcu. 2007a. Getting the structure right for word alignment: LEAF. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Prague, Czech Republic, June. A. Fraser and D. Marcu. 2007b. Measuring word alignment quality for statistical machine translation. Computational Linguistics, 33(3):293–303, September. J. Grac¸a, K. Ganchev, and B. Taskar. 2010. Learning tractable word alignment models with complex constraints. Computational Linguistics, 36, September. P. Lambert, A.D. Gispert, R. Banchs, and J.B. Marino. 2005. Guidelines for word alignment evaluation and manual alignment. Language Resources and Evaluation, 39(4):267–285. P. Liang, B. Taskar, and D. Klein. 2006. Alignment by agreement. In Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, New York, New York, June. D. Marcu and W. Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Philadelphia, Pennsylvania, July. D. Melamed. 2000. Models of translational equivalence among words. Computational Linguistics, 26(2):221–249. C. Michelot. 1986. A finite algorithm for finding the projection of a point onto the canonical simplex of IRn. Journal on Optimization Theory and Applications, 50(1), July. R.M. Neal and G.E. Hinton. 
1998. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M.I. Jordan, editor, Learning in Graphical Models. MIT press. F.J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. S. Ravi and K. Knight. 2010. Does GIZA++ make search errors? Computational Linguistics, 36(3). T. Schoenemann. 2010. Computing optimal alignments for the IBM-3 translation model. In Conference on Computational Natural Language Learning (CoNLL), Uppsala, Sweden, July. T. Schoenemann. 2011. Regularizing mono- and biword models for word alignment. In International Joint Conference on Natural Language Processing (IJCNLP), Chiang Mai, Thailand, November. E. Sumita, Y. Akiba, T. Doi, A. Finch, K. Imamura, H. Okuma, M. Paul, M. Shimohata, and T. Watanabe. 2004. EBMT, SMT, Hybrid and more: ATR spoken language translation system. In International Workshop on Spoken Language Translation (IWSLT), Kyoto, Japan, September. K. Toutanova, H.T. Ilhan, and C.D. Manning. 2002. Extensions to HMM-based statistical word alignment models. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Philadelphia, Pennsylvania, July. R. Udupa and H.K. Maji. 2006. Computational complexity of statistical machine translation. In Conference of the European Chapter of the Association for Computational Linguistics (EACL), Trento, Italy, April. A. Vaswani, L. Huang, and D. Chiang. 2012. Smaller alignment models for better translations: Unsupervised word alignment with the l0-norm. In Annual Meeting of the Association for Computational Linguistics (ACL), Jeju, Korea, July. S. Vogel, H. Ney, and C. Tillmann. 1996. HMM-based word alignment in statistical translation. In International Conference on Computational Linguistics (COLING), pages 836–841, Copenhagen, Denmark, August. Y.-Y. Wang and A. Waibel. 1998. Modeling with structures in statistical machine translation. In International Conference on Computational Linguistics (COLING), Montreal, Canada, August. 31
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 302–310, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A Context Free TAG Variant Ben Swanson Brown University Providence, RI [email protected] Eugene Charniak Brown University Providence, RI [email protected] Elif Yamangil Harvard University Cambridge, MA [email protected] Stuart Shieber Harvard University Cambridge, MA [email protected] Abstract We propose a new variant of TreeAdjoining Grammar that allows adjunction of full wrapping trees but still bears only context-free expressivity. We provide a transformation to context-free form, and a further reduction in probabilistic model size through factorization and pooling of parameters. This collapsed context-free form is used to implement efficient grammar estimation and parsing algorithms. We perform parsing experiments the Penn Treebank and draw comparisons to TreeSubstitution Grammars and between different variations in probabilistic model design. Examination of the most probable derivations reveals examples of the linguistically relevant structure that our variant makes possible. 1 Introduction While it is widely accepted that natural language is not context-free, practical limitations of existing algorithms motivate Context-Free Grammars (CFGs) as a good balance between modeling power and asymptotic performance (Charniak, 1996). In constituent-based parsing work, the prevailing technique to combat this divide between efficient models and real world data has been to selectively strengthen the dependencies in a CFG by increasing the grammar size through methods such as symbol refinement (Petrov et al., 2006). Another approach is to employ a more powerful grammatical formalism and devise constraints and transformations that allow use of essential efficient algorithms such as the Inside-Outside algorithm (Lari and Young, 1990) and CYK parsing. Tree-Adjoining Grammar (TAG) is a natural starting point for such methods as it is the canonical member of the mildly context-sensitive family, falling just above CFGs in the hierarchy of formal grammars. TAG has a crucial advantage over CFGs in its ability to represent long distance interactions in the face of the interposing variations that commonly manifest in natural language (Joshi and Schabes, 1997). Consider, for example, the sentences These pretzels are making me thirsty. These pretzels are not making me thirsty. These pretzels that I ate are making me thirsty. Using a context-free language model with proper phrase bracketing, the connection between the words pretzels and thirsty must be recorded with three separate patterns, which can lead to poor generalizability and unreliable sparse frequency estimates in probabilistic models. While these problems can be overcome to some extent with large amounts of data, redundant representation of patterns is particularly undesirable for systems that seek to extract coherent and concise information from text. TAG allows a linguistically motivated treatment of the example sentences above by generating the last two sentences through modification of the first, applying operations corresponding to negation and the use of a subordinate clause. Unfortunately, the added expressive power of TAG comes with O(n6) time complexity for essential algorithms on sentences of length n, as opposed to O(n3) for the CFG (Schabes, 1990). This makes TAG infeasible to analyze real world data in a reasonable time frame. 
In this paper, we define OSTAG, a new way to constrain TAG in a conceptually simple way so 302 S NP VP NP NP DT the NN lack NP NNS computers VP VBP do RB not VP NP NP PP NP PRP I PP IN of PRP them VP VB fear Figure 1: A simple Tree-Substitution Grammar using S as its start symbol. This grammar derives the sentences from a quote of Isaac Asimov’s - “I do not fear computers. I fear the lack of them.” that it can be reduced to a CFG, allowing the use of traditional cubic-time algorithms. The reduction is reversible, so that the original TAG derivation can be recovered exactly from the CFG parse. We provide this reduction in detail below and highlight the compression afforded by this TAG variant on synthetic formal languages. We evaluate OSTAG on the familiar task of parsing the Penn Treebank. Using an automatically induced Tree-Substitution Grammar (TSG), we heuristically extract an OSTAG and estimate its parameters from data using models with various reduced probabilistic models of adjunction. We contrast these models and investigate the use of adjunction in the most probable derivations of the test corpus, demonstating the superior modeling performance of OSTAG over TSG. 2 TAG and Variants Here we provide a short history of the relevant work in related grammar formalisms, leading up to a definition of OSTAG. We start with contextfree grammars, the components of which are ⟨N, T, R, S⟩, where N and T are the sets of nonterminal and terminal symbols respectively, and S is a distinguished nonterminal, the start symbol. The rules R can be thought of as elementary trees of depth 1, which are combined by substituting a derived tree rooted at a nonterminal X at some leaf node in an elementary tree with a frontier node labeled with that same nonterminal. The derived trees rooted at the start symbol S are taken to be the trees generated by the grammar. 2.1 Tree-Substitution Grammar By generalizing CFG to allow elementary trees in R to be of depth greater than or equal to 1, we get the Tree-Substitution Grammar. TSG remains in the family of context-free grammars, as can be easily seen by the removal of the internal nodes in all elementary trees; what is left is a CFG that generates the same language. As a reversible alternative that preserves the internal structure, annotation of each internal node with a unique index creates a large number of deterministic CFG rules that record the structure of the original elementary trees. A more compact CFG representation can be obtained by marking each node in each elementary tree with a signature of its subtree. This transform, presented by Goodman (2003), can rein in the grammar constant G, as the crucial CFG algorithms for a sentence of length n have complexity O(Gn3). A simple probabilistic model for a TSG is a set of multinomials, one for each nonterminal in N corresponding to its possible substitutions in R. A more flexible model allows a potentially infinite number of substitution rules using a Dirichlet Process (Cohn et al., 2009; Cohn and Blunsom, 2010). This model has proven effective for grammar induction via Markov Chain Monte Carlo (MCMC), in which TSG derivations of the training set are repeatedly sampled to find frequently occurring elementary trees. A straightforward technique for induction of a finite TSG is to perform this nonparametric induction and select the set of rules that appear in at least one sampled derivation at one or several of the final iterations. 
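To make the signature-based encoding mentioned above concrete, the sketch below flattens a single TSG elementary tree into CFG rules whose internal nonterminals carry a signature of their subtree. The nested-tuple tree encoding and the exact treatment of frontier nodes are assumptions of the sketch, not Goodman's or the authors' implementation.

```python
def tsg_tree_to_cfg(tree):
    """Encode one TSG elementary tree as CFG rules a la Goodman (2003).

    tree : nested tuples, e.g. ("NP", ("DT", "the"), ("NN", "lack")); a tuple
           of length 1 such as ("NP",) is a substitution site, a plain string
           is a terminal word
    The root keeps its plain label (so it can substitute wherever that label
    is required); every other internal node is renamed with a signature of
    its subtree, so the elementary tree can be read back off a CFG parse.
    """
    rules = []

    def signature(t):
        if isinstance(t, str) or len(t) == 1:
            return t if isinstance(t, str) else t[0]
        return t[0] + "{" + " ".join(signature(c) for c in t[1:]) + "}"

    def walk(t, is_root=False):
        if isinstance(t, str) or len(t) == 1:        # word or substitution site
            return t if isinstance(t, str) else t[0]
        lhs = t[0] if is_root else signature(t)
        rules.append((lhs, tuple(walk(c) for c in t[1:])))
        return lhs

    walk(tree, is_root=True)
    return rules
```

Because the root keeps its plain label while internal nodes are renamed, a CFG parse built from these rules can be mapped back to the TSG derivation it encodes.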
2.2 Tree-Adjoining Grammar Tree-adjoining grammar (TAG) (Joshi, 1985; Joshi, 1987; Joshi and Schabes, 1997) is an extension of TSG defined by a tuple ⟨N, T, R, A, S⟩, and differs from TSG only in the addition of a 303 VP always VP VP* quickly + S NP VP runs ⇒ S NP VP always VP VP runs quickly Figure 2: The adjunction operation combines the auxiliary tree (left) with the elementary tree (middle) to form a new derivation (right). The adjunction site is circled, and the foot node of the auxiliary tree is denoted with an asterisk. The OSTAG constraint would disallow further adjunction at the bold VP node only, as it is along the spine of the auxiliary tree. set of auxiliary trees A and the adjunction operation that governs their use. An auxiliary tree α is an elementary tree containing a single distinguished nonterminal leaf, the foot node, with the same symbol as the root of α. An auxiliary tree with root and foot node X can be adjoined into an internal node of an elementary tree labeled with X by splicing the auxiliary tree in at that internal node, as pictured in Figure 2. We refer to the path between the root and foot nodes in an auxiliary tree as the spine of the tree. As mentioned above, the added power afforded by adjunction comes at a serious price in time complexity. As such, probabilistic modeling for TAG in its original form is uncommon. However, a large effort in non-probabilistic grammar induction has been performed through manual annotation with the XTAG project(Doran et al., 1994). 2.3 Tree Insertion Grammar Tree Insertion Grammars (TIGs) are a longstanding compromise between the intuitive expressivity of TAG and the algorithmic simplicity of CFGs. Schabes and Waters (1995) showed that by restricting the form of the auxiliary trees in A and the set of auxiliary trees that may adjoin at particular nodes, a TAG generates only context-free languages. The TIG restriction on auxiliary trees states that the foot node must occur as either the leftmost or rightmost leaf node. This introduces an important distinction between left, right, and wrapping auxiliary trees, of which only the first two are allowed in TIG. Furthermore, TIG disallows adjunction of left auxiliary trees on the spines of right auxiliary trees, and vice versa. This is to prevent the construction of wrapping auxiliary trees, whose removal is essential for the simplified complexity of TIG. Several probabilistic models have been proposed for TIG. While earlier approaches such as Hwa (1998) and Chiang (2000) relied on hueristic induction methods, they were nevertheless sucessful at parsing. Later approaches (Shindo et al., 2011; Yamangil and Shieber, 2012) were able to extend the non-parametric modeling of TSGs to TIG, providing methods for both modeling and grammar induction. 2.4 OSTAG Our new TAG variant is extremely simple. We allow arbitrary initial and auxiliary trees, and place only one restriction on adjunction: we disallow adjunction at any node on the spine of an auxiliary tree below the root (though we discuss relaxing that constraint in Section 4.2). We refer to this variant as Off Spine TAG (OSTAG) and note that it allows the use of full wrapping rules, which are forbidden in TIG. This targeted blocking of recursion has similar motivations and benefits to the approximation of CFGs with regular languages (Mohri and jan Nederhof, 2000). The following sections discuss in detail the context-free nature of OSTAG and alternative probabilistic models for its equivalent CFG form. 
We propose a simple but empirically effective heuristic for grammar induction for our experiments on Penn Treebank data. 3 Transformation to CFG To demonstrate that OSTAG has only contextfree power, we provide a reduction to context-free grammar. Given an OSTAG ⟨N, T, R, A, S⟩, we define the set N of nodes of the corresponding CFG to be pairs of a tree in R or A together with an 304 α: S T x T y β: T a T* a γ: T b T* b S → X Y S → X Y X → x X → x Y → y Y → y X → A X → B Y → A′ Y → B′ A → a X′ a X → a X a A′ → a Y ′ a Y → a Y a X′ → X Y ′ → Y B → b X′′ b X → b X b B′ → b Y ′′ b Y → b Y b X′′ → X Y ′′ → Y (a) (b) (c) Figure 3: (a) OSTAG for the language wxwRvyvR where w, v ∈{a|b}+ and R reverses a string. (b) A CFG for the same language, which of necessity must distinguish between nonterminals X and Y playing the role of T in the OSTAG. (c) Simplified CFG, conflating nonterminals, but which must still distinguish between X and Y . address (Gorn number (Gorn, 1965)) in that tree. We take the nonterminals of the target CFG grammar to be nodes or pairs of nodes, elements of the set N +N ×N. We notate the pairs of nodes with a kind of “applicative” notation. Given two nodes η and η′, we notate a target nonterminal as η(η′). Now for each tree τ and each interior node η in τ that is not on the spine of τ, with children η1, . . . , ηk, we add a context-free rule to the grammar η →η1 · · · ηk (1) and if interior node η is on the spine of τ with ηs the child node also on the spine of τ (that is, dominating the foot node of τ) and η′ is a node (in any tree) where τ is adjoinable, we add a rule η(η′) →η1 · · · ηs(η′) · · · ηk . (2) Rules of type (1) handle the expansion of a node not on the spine of an auxiliary tree and rules of type (2) a spinal node. In addition, to initiate adjunction at any node η′ where a tree τ with root η is adjoinable, we use a rule η′ →η(η′) (3) and for the foot node ηf of τ, we use a rule ηf(η) →η (4) The OSTAG constraint follows immediately from the structure of the rules of type (2). Any child spine node ηs manifests as a CFG nonterminal ηs(η′). If child spine nodes themselves allowed adjunction, we would need a type (3) rule of the form ηs(η′) →ηs(η′)(η′′). This rule itself would feed adjunction, requiring further stacking of nodes, and an infinite set of CFG nonterminals and rules. This echoes exactly the stacking found in the LIG reduction of TAG . To handle substitution, any frontier node η that allows substitution of a tree rooted with node η′ engenders a rule η →η′ (5) This transformation is reversible, which is to say that each parse tree derived with this CFG implies exactly one OSTAG derivation, with substitutions and adjunctions coded by rules of type (5) and (3) respectively. Depending on the definition of a TAG derivation, however, the converse is not necessarily true. This arises from the spurious ambiguity between adjunction at a substitution site (before applying a type (5) rule) versus the same adjunction at the root of the substituted initial tree (after applying a type (5) rule). These choices lead to different derivations in CFG form, but their TAG derivations can be considered conceptually 305 identical. To avoid double-counting derivations, which can adversely effect probabilistic modeling, type (3) and type (4) rules in which the side with the unapplied symbol is a nonterminal leaf can be omitted. 3.1 Example The grammar of Figure 3(a) can be converted to a CFG by this method. 
We indicate for each CFG rule its type as defined above the production arrow. All types are used save type (5), as substitution is not employed in this example. For the initial tree α, we have the following generated rules (with nodes notated by the tree name and a Gorn number subscript): αϵ 1−→ α1 α2 α1 3−→ βϵ(α1) α1 1−→ x α1 3−→ γϵ(α1) α2 1−→ y α2 3−→ βϵ(α2) α2 3−→ γϵ(α2) For the auxiliary trees β and γ we have: βϵ(α1) 2−→ a β1(α1) a βϵ(α2) 2−→ a β1(α2) a β1(α1) 4−→ α1 β1(α2) 4−→ α2 γϵ(α1) 2−→ b γ1(α1) b γϵ(α2) 2−→ b γ1(α2) b γ1(α1) 4−→ α1 γ1(α2) 4−→ α2 The grammar of Figure 3(b) is simply a renaming of this grammar. 4 Applications 4.1 Compact grammars The OSTAG framework provides some leverage in expressing particular context-free languages more compactly than a CFG or even a TSG can. As an example, consider the language of bracketed palindromes Pal = ai w ai wR ai 1 ≤i ≤k w ∈{bj | 1 ≤j ≤m}∗ containing strings like a2 b1b3 a2 b3b1 a2. Any TSG for this language must include as substrings some subpalindrome constituents for long enough strings. Whatever nonterminal covers such a string, it must be specific to the a index within it, and must introduce at least one pair of bs as well. Thus, there are at least m such nonterminals, each introducing at least k rules, requiring at least km rules overall. The simplest such grammar, expressed as a CFG, is in Figure 4(a). The ability to use adjunction allows expression of the same language as an OSTAG with k + m elementary trees (Figure 4(b)). This example shows that an OSTAG can be quadratically smaller than the corresponding TSG or CFG. 4.2 Extensions The technique in OSTAG can be extended to expand its expressiveness without increasing generative capacity. First, OSTAG allows zero adjunctions on each node on the spine below the root of an auxiliary tree, but any non-zero finite bound on the number of adjunctions allowed on-spine would similarly limit generative capacity. The tradeoff is in the grammar constant of the effective probabilistic CFG; an extension that allows k levels of on spine adjunction has a grammar constant that is O(|N|(k+2)). Second, the OSTAG form of adjunction is consistent with the TIG form. That is, we can extend OSTAG by allowing on-spine adjunction of left- or right-auxiliary trees in keeping with the TIG constraints without increasing generative capacity. 4.3 Probabilistic OSTAG One major motivation for adherence to a contextfree grammar formalism is the ability to employ algorithms designed for probabilistic CFGs such as the CYK algorithm for parsing or the InsideOutside algorithm for grammar estimation. In this section we present a probabilistic model for an OSTAG grammar in PCFG form that can be used in such algorithms, and show that many parameters of this PCFG can be pooled or set equal to one and ignored. References to rules of types (1-5) below refer to the CFG transformation rules defined in Section 3. While in the preceeding discussion we used Gorn numbers for clarity, our discussion applies equally well for the Goodman transform discussed above, in which each node is labeled with a signature of its subtree. This simply redefines η in the CFG reduction described in Section 3 to be a subtree indicator, and dramatically reduces redundancy in the generated grammar. 
306 S → ai Ti ai Ti → bj Ti bj Ti → ai αi | 1 ≤i ≤k: S ai T ai ai βj | 1 ≤j ≤m: T bj T* bj (a) (b) Figure 4: A CFG (a) and more compact OSTAG (b) for the language Pal The first practical consideration is that CFG rules of type (2) are deterministic, and as such we need only record the rule itself and no associated parameter. Furthermore, these rules employ a template in which the stored symbol appears in the left-hand side and in exactly one symbol on the right-hand side where the spine of the auxiliary tree proceeds. One deterministic rule exists for this template applied to each η, and so we may record only the template. In order to perform CYK or IO, it is not even necessary to record the index in the right-hand side where the spine continues; these algorithms fill a chart bottom up and we can simply propagate the stored nonterminal up in the chart. CFG rules of type (4) are also deterministic and do not require parameters. In these cases it is not necessary to record the rules, as they all have exactly the same form. All that is required is a check that a given symbol is adjoinable, which is true for all symbols except nonterminal leaves and applied symbols. Rules of type (5) are necessary to capture the probability of substitution and so we will require a parameter for each. At first glance, it would seem that due to the identical domain of the left-hand sides of rules of types (1) and (3) a parameter is required for each such rule. To avoid this we propose the following factorization for the probabilistic expansion of an off spine node. First, a decision is made as to whether a type (1) or (3) rule will be used; this corresponds to deciding if adjunction will or will not take place at the node. If adjunction is rejected, then there is only one type (1) rule available, and so parameterization of type (1) rules is unnecessary. If we decide on adjunction, one of the available type (3) rules is chosen from a multinomial. By conditioning the probability of adjunction on varying amounts of information about the node, alternative models can easily be defined. 5 Experiments As a proof of concept, we investigate OSTAG in the context of the classic Penn Treebank statistical parsing setup; training on section 2-21 and testing on section 23. For preprocessing, words that occur only once in the training data are mapped to the unknown categories employed in the parser of Petrov et al. (2006). We also applied the annotation from Klein and Manning (2003) that appends “-U” to each nonterminal node with a single child, drastically reducing the presence of looping unary chains. This allows the use of a coarse to fine parsing strategy (Charniak et al., 2006) in which a sentence is first parsed with the Maximum Likelihood PCFG and only constituents whose probability exceeds a cutoff of 10−4 are allowed in the OSTAG chart. Designed to facilitate sister adjunction, we define our binarization scheme by example in which the added nodes, indicated by @, record both the parent and head child of the rule. NP @NN-NP @NN-NP DT @NN-NP JJ NN SBAR A compact TSG can be obtained automatically using the MCMC grammar induction technique of Cohn and Blunsom (2010), retaining all TSG rules that appear in at least one derivation in after 1000 iterations of sampling. We use EM to estimate the parameters of this grammar on sections 2-21, and use this as our baseline. 
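Returning briefly to the factored adjunction model of Section 4.3, the sketch below shows how the probability of a single off-spine expansion decomposes. The dictionary parameterization is an assumption, as is keying the type (3) multinomial on the same conditioning information as the adjunction decision; the three choices of key mirror the OSTAG1, OSTAG2 and OSTAG3 variants compared below.

```python
def expansion_prob(key, rule, p_adjoin, aux_multinomial):
    """Probability of expanding one off-spine node under the factored model.

    key  : conditioning information -- the node symbol (OSTAG1), the symbol
           plus its leftmost child's symbol (OSTAG2), or the node's Goodman
           index (OSTAG3)
    rule : None for the single type (1) rule (no adjunction), otherwise the
           identity of the auxiliary tree chosen by a type (3) rule
    """
    p = p_adjoin.get(key, 0.0)          # Bernoulli: adjoin here or not
    if rule is None:
        return 1.0 - p                  # only one type (1) rule exists
    return p * aux_multinomial[key].get(rule, 0.0)
```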
To generate a set of TAG rules, we consider each rule in our baseline TSG and find all possi307 All 40 #Adj #Wrap TSG 85.00 86.08 – – TSG′ 85.12 86.21 – – OSTAG1 85.42 86.43 1336 52 OSTAG2 85.54 86.56 1952 44 OSTAG3 85.86 86.84 3585 41 Figure 5: Parsing F-Score for the models under comparison for both the full test set and sentences of length 40 or less. For the OSTAG models, we list the number of adjunctions in the MPD of the full test set, as well as the number of wrapping adjunctions. ble auxiliary root and foot node pairs it contains. For each such root/foot pair, we include the TAG rule implied by removal of the structure above the root and below the foot. We also include the TSG rule left behind when the adjunction of this auxiliary tree is removed. To be sure that experimental gains are not due to this increased number of TSG initial trees, we calculate parameters using EM for this expanded TSG and use it as a second baseline (TSG′). With our full set of initial and auxiliary trees, we use EM and the PCFG reduction described above to estimate the parameters of an OSTAG. We investigate three models for the probability of adjunction at a node. The first uses a conservative number of parameters, with a Bernoulli variable for each symbol (OSTAG1). The second employs more parameters, conditioning on both the node’s symbol and the symbol of its leftmost child (OSTAG2).The third is highly parameterized but most prone to data sparsity, with a separate Bernoulli distribution for each Goodman index η (OSTAG3). We report results for Most Probable Derivation (MPD) parses of section 23 in Figure 5. Our results show that OSTAG outperforms both baselines. Furthermore, the various parameterizations of adjunction with OSTAG indicate that, at least in the case of the Penn Treebank, the finer grained modeling of a full table of adjunction probabilities for each Goodman index OSTAG3 overcomes the danger of sparse data estimates. Not only does such a model lead to better parsing performance, but it uses adjunction more extensively than its more lightly parameterized alternatives. While different representations make direct comparison inappropriate, the OSTAG results lie in the same range as previous work with statistical TIG on this task, such as Chiang (2000) (86.00) and Shindo et al. (2011) (85.03). The OSTAG constraint can be relaxed as described in Section 4.2 to allow any finite number of on-spine adjunctions without sacrificing contextfree form. However, the increase to the grammar constant quickly makes parsing with such models an arduous task. To determine the effect of such a relaxation, we allow a single level of on-spine adjunction using the adjunction model of OSTAG1, and estimate this model with EM on the training data. We parse sentences of length 40 or less in section 23 and observe that on-spine adjunction is never used in the MPD parses. This suggests that the OSTAG constraint is reasonable, at least for the domain of English news text. We performed further examination of the MPD using OSTAG for each of the sentences in the test corpus. As an artifact of the English language, the majority have their foot node on the left spine and would also be usable by TIG, and so we discuss the instances of wrapping auxiliary trees in these derivations that are uniquely available to OSTAG. We remove binarization for clarity and denote the foot node with an asterisk. A frequent use of wrapping adjunction is to coordinate symbols such as quotes, parentheses, and dashes on both sides of a noun phrase. 
One common wrapping auxiliary tree in our experiments is NP “ NP* ” PP This is used frequently in the news text of the Wall Street Journal for reported speech when avoiding a full quotation. This sentence is an example of the way the rule is employed, using what Joshi and Schabes (1997) referred to as “factoring recursion from linguistic constraints” with TAG. Note that replacing the quoted noun phrase and its following prepositional phrase with the noun phrase itself yields a valid sentence, in line with the linguistic theory underlying TAG. Another frequent wrapping rule, shown below, allows direct coordination between the contents of an appositive with the rest of the sentence. 308 NP NP , CC or NP* , This is a valuable ability, as it is common to use an appositive to provide context or explanation for a proper noun. As our information on proper nouns will most likely be very sparse, the appositive may be more reliably connected to the rest of the sentence. An example of this from one of the sentences in which this rule appears in the MPD is the phrase “since the market fell 156.83, or 8 %, a week after Black Monday”. The wrapping rule allows us to coordinate the verb “fell” with the pattern “X %” instead of 156.83, which is mapped to an unknown word category. These rules highlight the linguistic intuitions that back TAG; if their adjunction were undone, the remaining derivation would be a valid sentence that simply lacks the modifying structure of the auxiliary tree. However, the MPD parses reveal that not all useful adjunctions conform to this paradigm, and that left-auxiliary trees that are not used for sister adjunction are susceptible to this behavior. The most common such tree is used to create noun phrases such as P&G’s share of [the Japanese market] the House’s repeal of [a law] Apple’s family of [Macintosh Computers] Canada’s output of [crude oil] by adjoining the shared unbracketed syntax onto the NP dominating the bracketed text. If adjunction is taken to model modification, this rule drastically changes the semantics of the unmodified sentence. Furthermore, in some cases removing the adjunction can leave a grammatically incorrect sentence, as in the third example where the noun phrase changes plurality. While our grammar induction method is a crude (but effective) heuristic, we can still highlight the qualities of the more important auxiliary trees by examining aggregate statistics over the MPD parses, shown in Figure 6. The use of leftauxiliary trees for sister adjunction is a clear trend, as is the predominant use of right-auxiliary trees for the complementary set of “regular” adjunctions, which is to be expected in a right branching language such as English. The statistics also All Wrap Right Left Total 3585 (1374) 41 (26) 1698 (518) 1846 (830) Sister 2851 (1180) 17 (11) 1109 (400) 1725 (769) Lex 2244 (990) 28 (19) 894 (299) 1322 (672) FLex 1028 (558) 7 (2) 835 (472) 186 (84) Figure 6: Statistics for MPD auxiliary trees using OSTAG3. The columns indicate type of auxiliary tree and the rows correspond respectively to the full set found in the MPD, those that perform sister adjunction, those that are lexicalized, and those that are fully lexicalized. Each cell shows the number of tokens followed by the number of types of auxiliary tree that fit its conditions. reflect the importance of substitution in rightauxiliary trees, as they must capture the wide variety of right branching modifiers of the English language. 
6 Conclusion The OSTAG variant of Tree-Adjoining Grammar is a simple weakly context-free formalism that still provides for all types of adjunction and is a bit more concise than TSG (quadratically so). OSTAG can be reversibly transformed into CFG form, allowing the use of a wide range of well studied techniques in statistical parsing. OSTAG provides an alternative to TIG as a context-free TAG variant that offers wrapping adjunction in exchange for recursive left/right spine adjunction. It would be interesting to apply both OSTAG and TIG to different languages to determine where the constraints of one or the other are more or less appropriate. Another possibility is the combination of OSTAG with TIG, which would strictly expand the abilities of both approaches. The most important direction of future work for OSTAG is the development of a principled grammar induction model, perhaps using the same techniques that have been successfully applied to TSG and TIG. In order to motivate this and other related research, we release our implementation of EM and CYK parsing for OSTAG1. Our system performs the CFG transform described above and optionally employs coarse to fine pruning and relaxed (finite) limits on the number of spine adjunctions. As a TSG is simply a TAG without adjunction rules, our parser can easily be used as a TSG estimator and parser as well. 1bllip.cs.brown.edu/download/bucketparser.tar 309 References Eugene Charniak, Mark Johnson, Micha Elsner, Joseph L. Austerweil, David Ellis, Isaac Haxton, Catherine Hill, R. Shrivaths, Jeremy Moore, Michael Pozar, and Theresa Vu. 2006. Multilevel coarse-to-fine PCFG parsing. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Eugene Charniak. 1996. Tree-bank grammars. In Association for the Advancement of Artificial Intelligence, pages 1031–1036. David Chiang. 2000. Statistical parsing with an automatically-extracted tree adjoining grammar. Association for Computational Linguistics. Trevor Cohn and Phil Blunsom. 2010. Blocked inference in bayesian tree substitution grammars. pages 225–230. Association for Computational Linguistics. Trevor Cohn, Sharon Goldwater, and Phil Blunsom. 2009. Inducing compact but accurate treesubstitution grammars. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 548–556. Association for Computational Linguistics. Christy Doran, Dania Egedi, Beth Ann Hockey, Bangalore Srinivas, and Martin Zaidel. 1994. XTAG system: a wide coverage grammar for English. pages 922–928. Association for Computational Linguistics. J. Goodman. 2003. Efficient parsing of DOP with PCFG-reductions. Bod et al. 2003. Saul Gorn. 1965. Explicit definitions and linguistic dominoes. In Systems and Computer Science, pages 77–115. Rebecca Hwa. 1998. An empirical evaluation of probabilistic lexicalized tree insertion grammars. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 557–563. Association for Computational Linguistics. Aravind K. Joshi and Yves Schabes. 1997. Treeadjoining grammars. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 3, pages 69–124. Springer. Aravind K Joshi. 1985. Tree adjoining grammars: How much context-sensitivity is required to provide reasonable structural descriptions? University of Pennsylvania. Aravind K Joshi. 1987. 
An introduction to tree adjoining grammars. Mathematics of Language, pages 87–115. Dan Klein and Christopher D Manning. 2003. Accurate unlexicalized parsing. pages 423–430. Association for Computational Linguistics. K. Lari and S. J. Young. 1990. The estimation of stochastic context-free grammars using the insideoutside algorithm. Computer Speech and Language, pages 35–56. Mehryar Mohri and Mark jan Nederhof. 2000. Regular approximation of context-free grammars through transformation. In Robustness in language and speech technology. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 433– 440. Association for Computational Linguistics. Yves Schabes and Richard C. Waters. 1995. Tree insertion grammar: a cubic-time, parsable formalism that lexicalizes context-free grammar without changing the trees produced. Computational Linguistics, (4):479–513. Yves Schabes. 1990. Mathematical and computational aspects of lexicalized grammars. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA, USA. Hiroyuki Shindo, Akinori Fujino, and Masaaki Nagata. 2011. Insertion operator for bayesian tree substitution grammars. pages 206–211. Association for Computational Linguistics. Elif Yamangil and Stuart M. Shieber. 2012. Estimating compact yet rich tree insertion grammars. pages 110–114. Association for Computational Linguistics. 310
2013
30
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 311–321, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Fast and Adaptive Online Training of Feature-Rich Translation Models Spence Green, Sida Wang, Daniel Cer, and Christopher D. Manning Computer Science Department, Stanford University {spenceg,sidaw,danielcer,manning}@stanford.edu Abstract We present a fast and scalable online method for tuning statistical machine translation models with large feature sets. The standard tuning algorithm—MERT—only scales to tens of features. Recent discriminative algorithms that accommodate sparse features have produced smaller than expected translation quality gains in large systems. Our method, which is based on stochastic gradient descent with an adaptive learning rate, scales to millions of features and tuning sets with tens of thousands of sentences, while still converging after only a few epochs. Large-scale experiments on Arabic-English and Chinese-English show that our method produces significant translation quality gains by exploiting sparse features. Equally important is our analysis, which suggests techniques for mitigating overfitting and domain mismatch, and applies to other recent discriminative methods for machine translation. 1 Introduction Sparse, overlapping features such as words and ngram contexts improve many NLP systems such as parsers and taggers. Adaptation of discriminative learning methods for these types of features to statistical machine translation (MT) systems, which have historically used idiosyncratic learning techniques for a few dense features, has been an active research area for the past half-decade. However, despite some research successes, feature-rich models are rarely used in annual MT evaluations. For example, among all submissions to the WMT and IWSLT 2012 shared tasks, just one participant tuned more than 30 features (Hasler et al., 2012a). Slow uptake of these methods may be due to implementation complexities, or to practical difficulties of configuring them for specific translation tasks (Gimpel and Smith, 2012; Simianer et al., 2012, inter alia). We introduce a new method for training featurerich MT systems that is effective yet comparatively easy to implement. The algorithm scales to millions of features and large tuning sets. It optimizes a logistic objective identical to that of PRO (Hopkins and May, 2011) with stochastic gradient descent, although other objectives are possible. The learning rate is set adaptively using AdaGrad (Duchi et al., 2011), which is particularly effective for the mixture of dense and sparse features present in MT models. Finally, feature selection is implemented as efficient L1 regularization in the forward-backward splitting (FOBOS) framework (Duchi and Singer, 2009). Experiments show that our algorithm converges faster than batch alternatives. To learn good weights for the sparse features, most algorithms—including ours—benefit from more tuning data, and the natural source is the training bitext. However, the bitext presents two problems. First, it has a single reference, sometimes of lower quality than the multiple references in tuning sets from MT competitions. Second, large bitexts often comprise many text genres (Haddow and Koehn, 2012), a virtue for classical dense MT models but a curse for high dimensional models: bitext tuning can lead to a significant domain adaptation problem when evaluating on standard test sets. 
Our analysis separates and quantifies these two issues. We conduct large-scale translation quality experiments on Arabic-English and Chinese-English. As baselines we use MERT (Och, 2003), PRO, and the Moses (Koehn et al., 2007) implementation of k-best MIRA, which Cherry and Foster (2012) recently showed to work as well as online MIRA (Chiang, 2012) for feature-rich models. The first experiment uses standard tuning and test sets from the NIST OpenMT competitions. The second experiment uses tuning and test sets sampled from the large bitexts. The new method yields significant improvements in both experiments. Our code is included in the Phrasal (Cer et al., 2010) toolkit, which is freely available. 311 2 Adaptive Online Algorithms Machine translation is an unusual machine learning setting because multiple correct translations exist and decoding is comparatively expensive. When we have a large feature set and therefore want to tune on a large data set, batch methods are infeasible. Online methods can converge faster, and in practice they often find better solutions (Liang and Klein, 2009; Bottou and Bousquet, 2011, inter alia). Recall that stochastic gradient descent (SGD), a fundamental online method, updates weights w according to wt = wt−1 −η∇ℓt(wt−1) (1) with loss function1 ℓt(w) of the tth example, (sub)gradient of the loss with respect to the parameters ∇ℓt(wt−1), and learning rate η. SGD is sensitive to the learning rate η, which is difficult to set in an MT system that mixes frequent “dense” features (like the language model) with sparse features (e.g., for translation rules). Furthermore, η applies to each coordinate in the gradient, an undesirable property in MT where good sparse features may fire very infrequently. We would instead like to take larger steps for sparse features and smaller steps for dense features. 2.1 AdaGrad AdaGrad is a method for setting an adaptive learning rate that comes with good theoretical guarantees. The theoretical improvement over SGD is most significant for high-dimensional, sparse features. AdaGrad makes the following update: wt = wt−1 −ηΣ1/2 t ∇ℓt(wt−1) (2) Σ−1 t = Σ−1 t−1 + ∇ℓt(wt−1)∇ℓt(wt−1)⊤ = t X i=1 ∇ℓi(wi−1)∇ℓi(wi−1)⊤ (3) A diagonal approximation to Σ can be used for a high-dimensional vector wt. In this case, AdaGrad is simple to implement and computationally cheap. Consider a single dimension j, and let scalars vt = wt,j, gt = ∇jℓt(wt−1), Gt = Pt i=1 g2 i , then the update rule is vt = vt−1 −η G−1/2 t gt (4) Gt = Gt−1 + g2 t (5) Compared to SGD, we just need to store Gt = Σ−1 t,jj for each dimension j. 1We specify the loss function for MT in section 3.1. 2.2 Prior Online Algorithms in MT AdaGrad is related to two previous online learning methods for MT. MIRA Chiang et al. (2008) described an adaption of MIRA (Crammer et al., 2006) to MT. MIRA makes the following update: wt = arg min w 1 2η∥w −wt−1∥2 2 + ℓt(w) (6) The first term expresses conservativity: the weight should change as little as possible based on a single example, ensuring that it is never beneficial to overshoot the minimum. The relationship to SGD can be seen by linearizing the loss function ℓt(w) ≈ℓt(wt−1) + (w − wt−1)⊤∇ℓt(wt−1) and taking the derivative of (6). The result is exactly (1). AROW Chiang (2012) adapted AROW (Crammer et al., 2009) to MT. 
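Before turning to AROW in detail, the diagonal AdaGrad recursion of equations (4)-(5) can be written in a few lines. The following is a minimal sketch, not the toolkit implementation; the sparse-dictionary representation and feature names are our assumptions.

```python
import math
from collections import defaultdict

class DiagonalAdaGrad:
    """Per-coordinate learning rates of Eqs. (4)-(5) over sparse gradients."""
    def __init__(self, eta=0.02):
        self.eta = eta
        self.w = defaultdict(float)   # weights, stored sparsely
        self.G = defaultdict(float)   # running sums of squared gradients, G_j

    def update(self, grad):
        """grad: {feature: gradient value}, containing only active features."""
        for j, g in grad.items():
            if g == 0.0:
                continue
            self.G[j] += g * g                                 # Eq. (5)
            self.w[j] -= self.eta * g / math.sqrt(self.G[j])   # Eq. (4)

# A dense feature (e.g. the language model) that fires on every example decays
# its step size quickly, while a rare rule-indicator feature keeps a larger step.
opt = DiagonalAdaGrad(eta=0.1)
opt.update({"lm": 2.0, "rule:foo|bar": 0.5})
opt.update({"lm": 1.5})
```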
AROW models the current weight as a Gaussian centered at wt−1 with covariance Σt−1, and does the following update upon seeing training example xt: wt, Σt = arg min w,Σ 1 ηDKL(N(w, Σ)||N(wt−1, Σt−1)) + ℓt(w) + 1 2ηx⊤ t Σxt (7) The KL-divergence term expresses a more general, directionally sensitive conservativity. Ignoring the third term, the Σ that minimizes the KL is actually Σt−1. As a result, the first two terms of (7) generalize MIRA so that we may be more conservative in some directions specified by Σ. To see this, we can write out the KL-divergence between two Gaussians in closed form, and observe that the terms involving w do not interact with the terms involving Σ: wt = arg min w 1 2η(w −wt−1)⊤Σ−1 t−1(w −wt−1) + ℓt(w) (8) Σt = arg min Σ 1 2η log |Σt−1| |Σ|  + 1 2ηTr Σ−1 t−1Σ  + 1 2ηx⊤ t Σxt (9) The third term in (7), called the confidence term, gives us adaptivity, the notion that we should have smaller variance in the direction v as more data xt 312 is seen in direction v. For example, if Σ is diagonal and xt are indicator features, the confidence term then says that the weight for a rarer feature should have more variance and vice-versa. Recall that for generalized linear models ∇ℓt(w) ∝xt; if we substitute xt = αt∇ℓt(w) into (9), differentiate and solve, we get: Σ−1 t = Σ−1 t−1 + xtx⊤ t = Σ−1 0 + t X i=1 α2 i ∇ℓi(wi−1)∇ℓi(wi−1)⊤ (10) The precision Σ−1 t generally grows as more data is seen. Frequently updated features receive an especially high precision, whereas the model maintains large variance for rarely seen features. If we substitute (10) into (8), linearize the loss ℓt(w) as before, and solve, then we have the linearized AROW update wt = wt−1 −ηΣt∇ℓt(wt−1) (11) which is also an adaptive update with per-coordinate learning rates specified by Σt (as opposed to Σ1/2 t in AdaGrad). 2.3 Comparing AdaGrad, MIRA, AROW Compare (3) to (10) and observe that if we set Σ−1 0 = 0 and αt = 1, then the only difference between the AROW update (11) and the AdaGrad update (2) is a square root. Under a constant gradient, AROW decays the step size more aggressively (1/t) compared to AdaGrad (1/ √ t), and it is sensitive to the specification of Σ−1 0 . Informally, SGD can be improved in the conservativity direction using MIRA so the updates do not overshoot. Second, SGD can be improved in the adaptivity direction using AdaGrad where the decaying stepsize is more robust and the adaptive stepsize allows better weight updates to features differing in sparsity and scale. Finally, AROW combines both adaptivity and conservativity. For MT, adaptivity allows us to deal with mixed dense/sparse features effectively without specific normalization. Why do we choose AdaGrad over AROW? MIRA/AROW requires selecting the loss function ℓ(w) so that wt can be solved in closed-form, by a quadratic program (QP), or in some other way that is better than linearizing. This usually means choosing a hinge loss. On the other hand, AdaGrad/linearized AROW only requires that the gradient of the loss function can be computed efficiently. Algorithm 1 Adaptive online tuning for MT. Require: Tuning set {fi, e1:k i }i=1:M 1: Set w0 = 0 2: Set t = 1 3: repeat 4: for i in 1 . . . M in random order do 5: Decode n-best list Ni for fi 6: Sample pairs {dj,+, dj,−}j=1:s from Ni 7: Compute Dt = {φ(dj,+) −φ(dj,−)}j=1:s 8: Set gt = ∇ℓ(Dt; wt−1)} 9: Set Σ−1 t = Σ−1 t−1 + gtg⊤ t ▷Eq. (3) 10: Update wt = wt−1 −ηΣ1/2 t gt ▷Eq. (2) 11: Regularize wt ▷Eq. 
(15) 12: Set t = t + 1 13: end for 14: until convergence Linearized AROW, however, is less robust than AdaGrad empirically2 and lacks known theoretical guarantees. Finally, by using AdaGrad, we separate adaptivity from conservativity. Our experiments suggest that adaptivity is actually more important. 3 Adaptive Online MT Algorithm 1 shows the full algorithm introduced in this paper. AdaGrad (lines 9–10) is a crucial piece, but the loss function, regularization technique, and parallelization strategy described in this section are equally important in the MT setting. 3.1 Pairwise Logistic Loss Function Algorithm 1 lines 5–8 describe the gradient computation. We cast MT tuning as pairwise ranking (Herbrich et al., 1999, inter alia), which Hopkins and May (2011) applied to MT. The pairwise approach results in simple, convex loss functions suitable for online learning. The idea is that for any two derivations, the ranking predicted by the model should be consistent with the ranking predicted by a gold sentence-level metric G like BLEU+1 (Lin and Och, 2004). Consider a single source sentence f with associated references e1:k. Let d be a derivation in an n-best list of f that has the target e = e(d) and the feature map φ(d). Let M(d) = w · φ(d) be the model score. For any derivation d+ that is better than d−under G, we desire pairwise agreement such that G  e(d+), e1:k > G  e(d−), e1:k ⇐⇒M(d+) > M(d−) 2According to experiments not reported in this paper. 313 Ensuring pairwise agreement is the same as ensuring w · [φ(d+) −φ(d−)] > 0. For learning, we need to select derivation pairs (d+, d−) to compute difference vectors x+ = φ(d+) −φ(d−). Then we have a 1-class separation problem trying to ensure w · x+ > 0. The derivation pairs are sampled with the algorithm of Hopkins and May (2011). We compute difference vectors Dt = {x1:s + } (Algorithm 1 line 7) from s pairs (d+, d−) for source sentence ft. We use the familiar logistic loss: ℓt(w) = ℓ(Dt, w) = − X x+∈Dt log 1 1 + e−w·x+ (12) Choosing the hinge loss instead of the logistic loss results in the 1-class SVM problem. The 1class separation problem is equivalent to the binary classification problem with x+ = φ(d+) −φ(d−) as positive data and x−= −x+ as negative data, which may be plugged into an existing logistic regression solver. We find that Algorithm 1 works best with minibatches instead of single examples. In line 4 we simply partition the tuning set so that i becomes a mini-batch of examples. 3.2 Updating and Regularization Algorithm 1 lines 9–11 compute the adaptive learning rate, update the weights, and apply regularization. Section 2.1 explained the AdaGrad learning rate computation. To update and regularize the weights we apply the Forward-Backward Splitting (FOBOS) (Duchi and Singer, 2009) framework, which separates the two operations. The two-step FOBOS update is wt−1 2 = wt−1 −ηt−1∇ℓt−1(wt−1) (13) wt = arg min w 1 2∥w −wt−1 2 ∥2 2 + ηt−1r(w) (14) where (13) is just an unregularized gradient descent step and (14) balances the regularization term r(w) with staying close to the gradient step. Equation (14) permits efficient L1 regularization, which is well-suited for selecting good features from exponentially many irrelevant features (Ng, 2004). It is well-known that feature selection is very important for feature-rich MT. 
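As a concrete illustration of lines 5-8 of Algorithm 1 and the loss in Eq. (12), the sketch below samples difference vectors from an n-best list and computes the pairwise logistic gradient. It is our simplification, not the released code: in particular, keeping the s random pairs with the largest metric gap is only a stand-in for the Hopkins and May (2011) sampler, and the sparse-dictionary feature representation is assumed.

```python
import math
import random
from collections import defaultdict

def sample_difference_vectors(nbest, s=15, n_candidates=5000):
    """nbest: list of (feats, gold) pairs, where feats is a sparse feature dict
       and gold is the sentence-level metric score (e.g. BLEU+1).
       Returns s difference vectors x+ = phi(d+) - phi(d-)."""
    pairs = []
    for _ in range(n_candidates):
        (f1, g1), (f2, g2) = random.sample(nbest, 2)
        if g1 == g2:
            continue
        (fp, gp), (fm, gm) = ((f1, g1), (f2, g2)) if g1 > g2 else ((f2, g2), (f1, g1))
        pairs.append((gp - gm, fp, fm))
    pairs.sort(key=lambda p: -p[0])          # keep the pairs with the largest gap
    diffs = []
    for _, fp, fm in pairs[:s]:
        x = defaultdict(float)
        for j, v in fp.items():
            x[j] += v
        for j, v in fm.items():
            x[j] -= v
        diffs.append(x)
    return diffs

def logistic_gradient(diff_vectors, w):
    """Gradient of Eq. (12), -sum_x log sigmoid(w . x+), w a feature->weight dict."""
    grad = defaultdict(float)
    for x in diff_vectors:
        score = sum(w.get(j, 0.0) * v for j, v in x.items())
        coeff = -1.0 / (1.0 + math.exp(min(score, 50.0)))   # clamp avoids overflow
        for j, v in x.items():
            grad[j] += coeff * v
    return grad
```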
For example, simple indicator features like lexicalized re-ordering classes are potentially useful yet bloat the the feature set and, in the worst case, can negatively impact Algorithm 2 “Stale gradient” parallelization method for Algorithm 1. Require: Tuning set {fi, e1:k i }i=1:M 1: Initialize threadpool p1, . . . , pj 2: Set t = 1 3: repeat 4: for i in 1 . . . M in random order do 5: Wait until any thread p is idle 6: Send (fi, e1:k i , t) to p ▷Alg. 1 lines 5–8 7: while ∃p′ done with gradient gt′ do ▷t′ ≤t 8: Update wt = wt−1 −ηgt′ ▷Alg. 1 lines 9–11 9: Set t = t + 1 10: end while 11: end for 12: until convergence search. Some of the features generalize, but many do not. This was well understood in previous work, so heuristic filtering was usually applied (Chiang et al., 2009, inter alia). In contrast, we need only select an appropriate regularization strength λ. Specifically, when r(w) = λ∥w∥1, the closedform solution to (14) is wt = sign(wt−1 2 ) h |wt−1 2 | −ηt−1λ i + (15) where [x]+ = max(x, 0) is the clipping function that in this case sets a weight to 0 when it falls below the threshold ηt−1λ. It is straightforward to adapt this to AdaGrad with diagonal Σ by setting each dimension of ηt−1,j = ηΣ 1 2 t,jj and by taking element-wise products. We find that ∇ℓt−1(wt−1) only involves several hundred active features for the current example (or mini-batch). However, naively following the FOBOS framework requires updating millions of weights. But a practical benefit of FOBOS is that we can do lazy updates on just the active dimensions without any approximations. 3.3 Parallelization Algorithm 1 is inherently sequential like standard online learning. This is undesirable in MT where decoding is costly. We therefore parallelize the algorithm with the “stale gradient” method of Langford et al. (2009) (Algorithm 2). A fixed threadpool of workers computes gradients in parallel and sends them to a master thread, which updates a central weight vector. Crucially, the weight updates need not be applied in order, so synchronization is unnecessary; the workers only idle at the end of an epoch. The consequence is that the update in line 8 of Algorithm 2 is with respect to gradient gt′ with t′ ≤t. Langford et al. (2009) gave convergence results for 314 stale updating, but the bounds do not apply to our setting since we use L1 regularization. Nevertheless, Gimpel et al. (2010) applied this framework to other non-convex objectives and obtained good empirical results. Our asynchronous, stochastic method has practical appeal for MT. During a tuning run, the online method decodes the tuning set under many more weight vectors than a MERT-style batch method. This characteristic may result in broader exploration of the search space, and make the learner more robust to local optima local optima (Liang and Klein, 2009; Bottou and Bousquet, 2011, inter alia). The adaptive algorithm identifies appropriate learning rates for the mixture of dense and sparse features. Finally, large data structures such as the language model (LM) and phrase table exist in shared memory, obviating the need for remote queries. 4 Experiments We built Arabic-English and Chinese-English MT systems with Phrasal (Cer et al., 2010), a phrasebased system based on alignment templates (Och and Ney, 2004). The corpora3 in our experiments (Table 1) derive from several LDC sources from 2012 and earlier. We de-duplicated each bitext according to exact string match, and ensured that no overlap existed with the test sets. 
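For reference, the backward (regularization) half of the update just described, Eq. (15) applied with the per-coordinate rates of the adaptive step, is a simple soft-thresholding. The sketch below is ours, with naming conventions assumed; in practice it would be applied lazily, only to the features active in the current mini-batch.

```python
import math

def l1_prox(w_half, rates, lam):
    """w_half: weights after the unregularized step of Eq. (13);
       rates:  per-coordinate learning rates eta_{t-1,j} = eta / sqrt(G_j);
       lam:    L1 regularization strength lambda.
       Weights whose magnitude falls below rates[j]*lam are clipped to zero."""
    return {j: math.copysign(max(abs(v) - rates[j] * lam, 0.0), v)
            for j, v in w_half.items()}
```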
We produced alignments with the Berkeley aligner (Liang et al., 2006b) with standard settings and symmetrized via the grow-diag heuristic. For each language we used SRILM (Stolcke, 2002) to estimate 5-gram LMs with modified Kneser-Ney smoothing. We included the monolingual English data and the respective target bitexts. 4.1 Feature Templates The baseline “dense” model contains 19 features: the nine Moses baseline features, the hierarchical lexicalized re-ordering model of Galley and Manning (2008), the (log) count of each rule, and an indicator for unique rules. To the dense features we add three high dimensional “sparse” feature sets. Discrimina3We tokenized the English with packages from the Stanford Parser (Klein and Manning, 2003) according to the Penn Treebank standard (Marcus et al., 1993), the Arabic with the Stanford Arabic segmenter (Green and DeNero, 2012) according to the Penn Arabic Treebank standard (Maamouri et al., 2008), and the Chinese with the Stanford Chinese segmenter (Chang et al., 2008) according to the Penn Chinese Treebank standard (Xue et al., 2005). Bilingual Monolingual Sentences Tokens Tokens Ar-En 6.6M 375M 990M Zh-En 9.3M 538M Table 1: Bilingual and monolingual corpora used in these experiments. The monolingual English data comes from the AFP and Xinhua sections of English Gigaword 4 (LDC2009T13). tive phrase table (PT): indicators for each rule in the phrase table. Alignments (AL): indicators for phrase-internal alignments and deleted (unaligned) source words. Discriminative reordering (LO): indicators for eight lexicalized reordering classes, including the six standard monotone/swap/discontinuous classes plus the two simpler Moses monotone/non-monotone classes. 4.2 Tuning Algorithms The primary baseline is the dense feature set tuned with MERT (Och, 2003). The Phrasal implementation uses the line search algorithm of Cer et al. (2008), uniform initialization, and 20 random starting points.4 We tuned according to BLEU-4 (Papineni et al., 2002). We built high dimensional baselines with two different algorithms. First, we tuned with batch PRO using the default settings in Phrasal (L2 regularization with σ=0.1). Second, we ran the k-best batch MIRA (kb-MIRA) (Cherry and Foster, 2012) implementation in Moses. We did implement an online version of MIRA, and in small-scale experiments found that the batch variant worked just as well. Cherry and Foster (2012) reported the same result, and their implementation is available in Moses. We ran their code with standard settings. Moses5 also contains the discriminative phrase table implementation of (Hasler et al., 2012b), which is identical to our implementation using Phrasal. Moses and Phrasal accept the same phrase table and LM formats, so we kept those data structures in common. The two decoders also use the same multi-stack beam search (Och and Ney, 2004). For our method, we used uniform initialization, 16 threads, and a mini-batch size of 20. We found that η=0.02 and λ=0.1 worked well on development sets for both languages. To compute the gradients 4Other system settings for all experiments: distortion limit of 5, a maximum phrase length of 7, and an n-best size of 200. 
5v1.0 (28 January 2013) 315 Model #features Algorithm Tuning Set MT02 MT03 MT04 MT09 Dense 19 MERT MT06 45.08 51.32 52.26 51.42 48.44 Dense 19 This paper MT06 44.19 51.42 52.52 50.16 48.13 +PT 151k kb-MIRA MT06 42.08 47.25 48.98 47.08 45.64 +PT 23k PRO MT06 44.31 51.06 52.18 50.23 47.52 +PT 50k This paper MT06 50.61 51.71 52.89 50.42 48.74 +PT+AL+LO 109k PRO MT06 44.87 51.25 52.43 50.05 47.76 +PT+AL+LO 242k This paper MT06 57.84 52.45 53.18 51.38 49.37 Dense 19 MERT MT05/6/8 49.63 51.60 52.29 51.73 48.68 +PT+AL+LO 390k This paper MT05/6/8 58.20 53.61 54.99 52.79 49.94 (Chiang, 2012)* 10-20k MIRA MT04/6 – – – – 45.90 (Chiang, 2012)* 10-20k AROW MT04/6 – – – – 47.60 #sentences 728 663 1,075 1,313 Table 2: Ar-En results [BLEU-4 % uncased] for the NIST tuning experiment. The tuning and test sets each have four references. MT06 has 1,717 sentences, while the concatenated MT05/6/8 set has 4,213 sentences. Bold indicates statistical significance relative to the best baseline in each block at p < 0.001; bold-italic at p < 0.05. We assessed significance with the permutation test of Riezler and Maxwell (2005). (*) Chiang (2012) used a similar-sized bitext, but two LMs trained on twice as much monolingual data. Model #features Algorithm Tuning Set MT02 MT03 MT04 Dense 19 MERT MT06 33.90 35.72 33.71 34.26 Dense 19 This paper MT06 32.60 36.23 35.14 34.78 +PT 105k kb-MIRA MT06 29.46 30.67 28.96 30.05 +PT 26k PRO MT06 33.70 36.87 34.62 34.80 +PT 66k This paper MT06 33.90 36.09 34.86 34.73 +PT+AL+LO 148k PRO MT06 34.81 36.31 33.81 34.41 +PT+AL+LO 344k This paper MT06 38.99 36.40 35.07 34.84 Dense 19 MERT MT05/6/8 32.36 35.69 33.83 34.33 +PT+AL+LO 487k This paper MT05/6/8 37.64 37.81 36.26 36.15 #sentences 878 919 1,597 Table 3: Zh-En results [BLEU-4 % uncased] for the NIST tuning experiment. MT05/6/8 has 4,103 sentences. OpenMT 2009 did not include Zh-En, hence the asymmetry with Table 2. we sampled 15 derivation pairs for each tuning example and scored them with BLEU+1. 4.3 NIST OpenMT Experiment The first experiment evaluates our algorithm when tuning and testing on standard test sets, each with four references. When we add features, our algorithm tends to overfit to a standard-sized tuning set like MT06. We thus concatenated MT05, MT06, and MT08 to create a larger tuning set. Table 2 shows the Ar-En results. Our algorithm is competitive with MERT in the low dimensional “dense” setting, and compares favorably to PRO with the PT feature set. PRO does not benefit from additional features, whereas our algorithm improves with both additional features and data. The underperformance of kb-MIRA may result from a difference between Moses and Phrasal: Moses MERT achieves only 45.62 on MT09. Moses PRO with the PT feature set is slightly worse, e.g., 44.52 on MT09. Nevertheless, kb-MIRA does not improve significantly over MERT, and also selects an unnecessarily large model. The full feature set PT+AL+LO does help. With the PT feature set alone, our algorithm tuned on MT05/6/8 scores well below the best model, e.g. 316 Model #features Algorithm Tuning Set #refs bitext5k-test MT04 Dense 19 MERT MT06 45.08 4 39.28 51.42 +PT 72k This paper MT05/6/8 51.29 4 39.50 50.60 +PT 79k This paper bitext5k 44.79 1 43.85 45.73 +PT+AL+LO 647k This paper bitext15k 45.68 1 43.93 45.24 Table 4: Ar-En results [BLEU-4 % uncased] for the bitext tuning experiment. Statistical significance is relative to the Dense baseline. We include MT04 for comparison to the NIST genre. 
Model #features Algorithm Tuning Set #refs bitext5k-test MT04 Dense 19 MERT MT06 33.90 4 33.44 34.26 +PT 97k This paper MT05/6/8 34.45 4 35.08 35.19 +PT 67k This paper bitext5k 36.26 1 36.01 33.76 +PT+AL+LO 536k This paper bitext15k 37.57 1 36.30 34.05 Table 5: Zh-En results [BLEU-4 % uncased] for the bitext tuning experiment. 48.56 BLEU on MT09. For Ar-En, our algorithm thus has the desirable property of benefiting from more and better features, and more data. Table 3 shows Zh-En results. Somewhat surprisingly our algorithm improves over MERT in the dense setting. When we add the discriminative phrase table, our algorithm improves over kbMIRA, and over batch PRO on two evaluation sets. With all features and the MT05/6/8 tuning set, we improve significantly over all other models. PRO learns a smaller model with the PT+AL+LO feature set which is surprising given that it applies L2 regularization (AdaGrad uses L1). We speculate that this may be an consequence of stochastic learning. Our algorithm decodes each example with a new weight vector, thus exploring more of the search space for the same tuning set. 4.4 Bitext Tuning Experiment Tables 2 and 3 show that adding tuning examples improves translation quality. Nevertheless, even the larger tuning set is small relative to the bitext from which rules were extracted. He and Deng (2012) and Simianer et al. (2012) showed significant translation quality gains by tuning on the bitext. However, their bitexts matched the genre of their test sets. Our bitexts, like those of most large-scale systems, do not. Domain mismatch matters for the dense feature set (Haddow and Koehn, 2012). We show that it also matters for feature-rich MT. Before aligning each bitext, we randomly sampled and sequestered 5k and 15k sentence tuning sets, and a 5k test set. We prevented overlap beDA DB |A| |B| |A ∩B| MT04 MT06 70k 72k 5.9k MT04 MT568 70k 96k 7.6k MT04 bitext5k 70k 67k 4.4k MT04 bitext15k 70k 310k 10.5k 5ktest bitext5k 82k 67k 5.6k 5ktest bitext15k 82k 310k 14k Table 6: Number of overlapping phrase table (+PT) features on various Zh-En dataset pairs. tween the tuning sets and the test set. We then tuned a dense model with MERT on MT06, and feature-rich models on both MT05/6/8 and the bitext tuning set. Table 4 shows the Ar-En results. When tuned on bitext5k the translation quality gains are significant for bitext5k-test relative to tuning on MT05/6/8, which has multiple references. However, the bitext5k models do not generalize as well to the NIST evaluation sets as represented by the MT04 result. Table 5 shows similar trends for Zh-En. 5 Analysis 5.1 Feature Overlap Analysis How many sparse features appear in both the tuning and test sets? In Table 6, A is the set of phrase table features that received a non-zero weight when tuned on dataset DA (same for B). Column DA lists several Zh-En test sets used and column DB lists tuning sets. Our experiments showed that tuning on MT06 generalizes better to MT04 than tuning 317 on bitext5k, whereas tuning on bitext5k generalizes better to bitext5k-test than tuning on MT06. These trends are consistent with the level of feature overlap. Phrase table features in A ∩B are overwhelmingly short, simple, and correct phrases, suggesting L1 regularization is effective for feature selection. It is also important to balance the number of features with how well weights can be learned for those features, as tuning on bitext15k produced higher coverage for MT04 but worse generalization than tuning on MT06. 
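The overlap counts of the kind reported in Table 6 can be reproduced directly from two tuned weight vectors; a trivial sketch follows (the "PT:" feature-name prefix is our assumption, not the system's actual naming scheme).

```python
def phrase_table_overlap(w_a, w_b, prefix="PT:"):
    """w_a, w_b: feature -> weight dicts of two tuned models.
       Returns (|A|, |B|, |A & B|) over phrase-table features with non-zero weight."""
    A = {f for f, v in w_a.items() if f.startswith(prefix) and v != 0.0}
    B = {f for f, v in w_b.items() if f.startswith(prefix) and v != 0.0}
    return len(A), len(B), len(A & B)
```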
5.2 Domain Adaptation Analysis To understand the domain adaptation issue we compared the non-zero weights in the discriminative phrase table (PT) for Ar-En models tuned on bitext5k and MT05/6/8. Table 7 illustrates a statistical idiosyncrasy in the data for the American and British spellings of program/programme. The mass is concentrated along the diagonal, probably because MT05/6/8 was prepared by NIST, an American agency, while the bitext was collected from many sources including Agence France Presse. Of course, this discrepancy is consequential for both dense and feature-rich models. However, we observe that the feature-rich models fit the tuning data more closely. For example, the MT05/6/8 model learns rules like l.×A KQK. áÒ ’JK →program includes, l.×A KQK. →program of, and l.×A KQ.Ë@ è Y ¯A K → program window. Crucially, it does not learn the basic rule l.×A KQK. →program. In contrast, the bitext5k model contains basic rules such l.×A KQK. →programme, l.×A KQ.Ë@ @ Yë →this programme, and l.×A KQ.Ë@ ½Ë X →that programme. It also contains more elaborate rules such as l.×A KQ.Ë@ HA® ® K I KA¿ →programme expenses were and éËñë AÖÏ@ éJ KA ’ ®Ë@ HCgQË@ l.×@QK. →manned space flight programmes. We observed similar trends for ‘defense/defence’, ‘analyze/analyse’, etc. This particular genre problem could be addressed with language-specific pre-processing, but our system solves it in a data-driven manner. 5.3 Re-ordering Analysis We also analyzed re-ordering differences. Arabic matrix clauses tend to be verb-initial, meaning that the subject and verb must be swapped when translating to English. To assess re-ordering differences— if any—between the dense and feature-rich models, we selected all MT09 segments that began with one # bitext5k # MT05/6/8 programme 185 0 program 19 449 PT rules w/ programme 353 79 PT rules w/ program 9 31 Table 7: Top: comparison of token counts in two Ar-En tuning sets for programme and program. Bottom: rule counts in the discriminative phrase table (PT) for models tuned on the two tuning sets. Both spellings correspond to the Arabic l.×A KQK.. of seven common verbs: ÈA¯ qaal ‘said’, hQå• SrH ‘declared’,PA ƒ @ ashaar ‘indicated’, àA¿ kaan ‘was’, Q» X dhkr ‘commented’, ¬A “ @ aDaaf ‘added’, áÊ« @ acln ‘announced’. We compared the output of the MERT Dense model to our method with the full feature set, both tuned on MT06. Of the 208 source segments, 32 of the translation pairs contained different word order in the matrix clause. Our featurerich model was correct 18 times (56.3%), Dense was correct 4 times (12.5%), and neither method was correct 10 times (31.3%). (1) ref: lebanese prime minister , fuad siniora , announced a. and lebanese prime minister fuad siniora that b. the lebanese prime minister fouad siniora announced (2) ref: the newspaper and television reported a. she said the newspaper and television b. television and newspaper said In (1) the dense model (1a) drops the verb while the feature-rich model correctly re-orders and inserts it after the subject (1b). The coordinated subject in (2) becomes an embedded subject in the dense output (2a). The feature-rich model (2b) performs the correct re-ordering. 5.4 Runtime Comparison Table 8 compares our method to standard implementations of the other algorithms. MERT parallelizes easily but runtime increases quadratically with nbest list size. PRO runs (single-threaded) L-BFGS to convergence on every epoch, a potentially slow procedure for the larger feature set. Moreover, both 318 epochs min. 
MERT Dense 22 180 PRO +PT 25 35 kb-MIRA* +PT 26 25 This paper +PT 10 10 PRO +PT+AL+LO 13 150 This paper +PT+AL+LO 5 15 Table 8: Epochs to convergence (“epochs”) and approximate runtime per epoch in minutes (“min.”) for selected Zh-En experiments tuned on MT06. All runs executed on the same dedicated system with the same number of threads. (*) Moses and kb-MIRA are written in C++, while all other rows refer to Java implementations in Phrasal. the Phrasal and Moses PRO implementations use L2 regularization, which regularizes every weight on every update. kb-MIRA makes multiple passes through the n-best lists during each epoch. The Moses implementation parallelizes decoding but weight updating is sequential. The core of our method is an inner product between the adaptive learning rate vector and the gradient. This is easy to implement and is very fast even for large feature sets. Since we applied lazy regularization, this inner product usually involves hundred-dimensional vectors. Finally, our method does not need to accumulate n-best lists, a practice that slows down the other algorithms. 6 Related Work Our work relates most closely to that of Hasler et al. (2012b), who tuned models containing both sparse and dense features with Moses. A discriminative phrase table helped them improve slightly over a dense, online MIRA baseline, but their best results required initialization with MERT-tuned weights and re-tuning a single, shared weight for the discriminative phrase table with MERT. In contrast, our algorithm learned good high dimensional models from a uniform starting point. Chiang (2012) adapted AROW to MT and extended previous work on online MIRA (Chiang et al., 2008; Watanabe et al., 2007). It was not clear if his improvements came from the novel Hope/Fear search, the conservativity gain from MIRA/AROW by solving the QP exactly, adaptivity, or sophisticated parallelization. In contrast, we show that AdaGrad, which ignores conservativity and only capturing adaptivity, is sufficient. Simianer et al. (2012) investigated SGD with a pairwise perceptron objective. Their best algorithm used iterative parameter mixing (McDonald et al., 2010), which we found to be slower than the stale gradient method in section 3.3. They regularized once at the end of each epoch, whereas we regularized each weight update. An empirical comparison of these two strategies would be an interesting future contribution. Watanabe (2012) investigated SGD and even randomly selected pairwise samples as we did. He considered both softmax and hinge losses, observing better results with the latter, which solves a QP. Their parallelization strategy required a line search at the end of each epoch. Many other discriminative techniques have been proposed based on: ramp loss (Gimpel, 2012); hinge loss (Cherry and Foster, 2012; Haddow et al., 2011; Arun and Koehn, 2007); maximum entropy (Xiang and Ittycheriah, 2011; Ittycheriah and Roukos, 2007; Och and Ney, 2002); perceptron (Liang et al., 2006a); and structured SVM (Tillmann and Zhang, 2006). These works use radically different experimental setups, and to our knowledge only (Cherry and Foster, 2012) and this work compare to at least two high dimensional baselines. Broader comparisons, though time-intensive, could help differentiate these methods. 7 Conclusion and Outlook We introduced a new online method for tuning feature-rich translation models. The method is faster per epoch than MERT, scales to millions of features, and converges quickly. 
We used efficient L1 regularization for feature selection, obviating the need for the feature scaling and heuristic filtering common in prior work. Those comfortable with implementing vanilla SGD should find our method easy to implement. Even basic discriminative features were effective, so we believe that our work enables fresh approaches to more sophisticated MT feature engineering. Acknowledgments We thank John DeNero for helpful comments on an earlier draft. The first author is supported by a National Science Foundation Graduate Research Fellowship. We also acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Broad Operational Language Translation (BOLT) program through IBM. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the DARPA or the US government. 319 References A. Arun and P. Koehn. 2007. Online learning methods for discriminative training of phrase based statistical machine translation. In MT Summit XI. L. Bottou and O. Bousquet. 2011. The tradeoffs of large scale learning. In Optimization for Machine Learning, pages 351–368. MIT Press. D. Cer, D. Jurafsky, and C. D. Manning. 2008. Regularization and search for minimum error rate training. In WMT. D. Cer, M. Galley, D. Jurafsky, and C. D. Manning. 2010. Phrasal: A statistical machine translation toolkit for exploring new model features. In HLTNAACL, Demonstration Session. P-C. Chang, M. Galley, and C. D. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In WMT. C. Cherry and G. Foster. 2012. Batch tuning strategies for statistical machine translation. In HLT-NAACL. D. Chiang, Y. Marton, and P. Resnik. 2008. Online large-margin training of syntactic and structural translation features. In EMNLP. D. Chiang, K. Knight, and W. Wang. 2009. 11,001 new features for statistical machine translation. In HLT-NAACL. D. Chiang. 2012. Hope and fear for discriminative training of statistical translation models. JMLR, 13:1159–1187. K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. 2006. Online passive-aggressive algorithms. JMLR, 7:551–585. K. Crammer, A. Kulesza, and M. Dredze. 2009. Adaptive regularization of weight vectors. In NIPS. J. Duchi and Y. Singer. 2009. Efficient online and batch learning using forward backward splitting. JMLR, 10:2899–2934. J. Duchi, E. Hazan, and Y. Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121–2159. M. Galley and C. D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In EMNLP. K. Gimpel and N. A. Smith. 2012. Structured ramp loss minimization for machine translation. In HLTNAACL. K. Gimpel, D. Das, and N. A. Smith. 2010. Distributed asynchronous online learning for natural language processing. In CoNLL. K. Gimpel. 2012. Discriminative Feature-Rich Modeling for Syntax-Based Machine Translation. Ph.D. thesis, Language Technologies Institute, Carnegie Mellon University. S. Green and J. DeNero. 2012. A class-based agreement model for generating accurately inflected translations. In ACL. B. Haddow and P. Koehn. 2012. Analysing the effect of out-of-domain data on SMT systems. In WMT. B. Haddow, A. Arun, and P. Koehn. 2011. SampleRank training for phrase-based machine translation. In WMT. E. Hasler, P. Bell, A. Ghoshal, B. Haddow, P. Koehn, F. McInnes, et al. 2012a. The UEDIN systems for the IWSLT 2012 evaluation. In IWSLT. E. Hasler, B. 
Haddow, and P. Koehn. 2012b. Sparse lexicalised features and topic adaptation for SMT. In IWSLT. X. He and L. Deng. 2012. Maximum expected BLEU training of phrase and lexicon translation models. In ACL. R. Herbrich, T. Graepel, and K. Obermayer. 1999. Support vector learning for ordinal regression. In ICANN. M. Hopkins and J. May. 2011. Tuning as ranking. In EMNLP. A. Ittycheriah and S. Roukos. 2007. Direct translation model 2. In HLT-NAACL. D. Klein and C. D. Manning. 2003. Accurate unlexicalized parsing. In ACL. P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, et al. 2007. Moses: Open source toolkit for statistical machine translation. In ACL, Demonstration Session. J. Langford, A. J. Smola, and M. Zinkevich. 2009. Slow learners are fast. In NIPS. P. Liang and D. Klein. 2009. Online EM for unsupervised models. In HLT-NAACL. P. Liang, A. Bouchard-Côté, D. Klein, and B. Taskar. 2006a. An end-to-end discriminative approach to machine translation. In ACL. P. Liang, B. Taskar, and D. Klein. 2006b. Alignment by agreement. In NAACL. C.-Y. Lin and F. J. Och. 2004. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. In COLING. M. Maamouri, A. Bies, and S. Kulick. 2008. Enhancing the Arabic Treebank: A collaborative effort toward new annotation guidelines. In LREC. 320 M. Marcus, M. A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330. R. McDonald, K. Hall, and G. Mann. 2010. Distributed training strategies for the structured perceptron. In NAACL-HLT. A. Y. Ng. 2004. Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML. F. J. Och and H. Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In ACL. F. J. Och and H. Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449. F. J. Och. 2003. Minimum error rate training for statistical machine translation. In ACL. K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL. S. Riezler and J. T. Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing in MT. In ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization (MTSE). P. Simianer, S. Riezler, and C. Dyer. 2012. Joint feature selection in distributed stochastic learning for largescale discriminative training in SMT. In ACL. A Stolcke. 2002. SRILM—an extensible language modeling toolkit. In ICSLP. C. Tillmann and T. Zhang. 2006. A discriminative global training algorithm for statistical MT. In ACLCOLING. T. Watanabe, J. Suzuki, H. Tsukada, and H. Isozaki. 2007. Online large-margin training for statistical machine translation. In EMNLP-CoNLL. T. Watanabe. 2012. Optimized online rank learning for machine translation. In HLT-NAACL. Association for Computational Linguistics. B. Xiang and A. Ittycheriah. 2011. Discriminative feature-tied mixture modeling for statistical machine translation. In ACL-HLT. N. Xue, F. Xia, F. Chiou, and M. Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238. 321
2013
31
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 322–332, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Advancements in Reordering Models for Statistical Machine Translation Minwei Feng and Jan-Thorsten Peter and Hermann Ney Human Language Technology and Pattern Recognition Computer Science Department, RWTH Aachen University, Aachen, Germany <surname>@cs.rwth-aachen.de Abstract In this paper, we propose a novel reordering model based on sequence labeling techniques. Our model converts the reordering problem into a sequence labeling problem, i.e. a tagging task. Results on five Chinese-English NIST tasks show that our model improves the baseline system by 1.32 BLEU and 1.53 TER on average. Results of comparative study with other seven widely used reordering models will also be reported. 1 Introduction The systematic word order difference between two languages poses a challenge for current statistical machine translation (SMT) systems. The system has to decide in which order to translate the given source words. This problem is known as the reordering problem. As shown in (Knight, 1999), if arbitrary reordering is allowed, the search problem is NP-hard. Many ideas have been proposed to address the reordering problem. Within the phrase-based SMT framework there are mainly three stages where improved reordering could be integrated: In the preprocessing: the source sentence is reordered by heuristics, so that the word order of source and target sentences is similar. (Wang et al., 2007) use manually designed rules to reorder parse trees of the source sentences. Based on shallow syntax, (Zhang et al., 2007) use rules to reorder the source sentences on the chunk level and provide a source-reordering lattice instead of a single reordered source sentence as input to the SMT system. Designing rules to reorder the source sentence is conceptually clear and usually easy to implement. In this way, syntax information can be incorporated into phrase-based SMT systems. However, one disadvantage is that the reliability of the rules is often language pair dependent. In the decoder: we can add constraints or models into the decoder to reward good reordering options or penalize bad ones. For reordering constraints, early work includes ITG constraints (Wu, 1995) and IBM constraints (Berger et al., 1996). (Zens and Ney, 2003) did comparative study over different reordering constraints. This paper focuses on reordering models. For reordering models, we can further roughly divide the existing methods into three genres: • The reordering is a classification problem. The classifier will make decision on next phrase’s relative position with current phrase. The classifier can be trained with maximum likelihood like Moses lexicalized reordering (Koehn et al., 2007) and hierarchical lexicalized reordering model (Galley and Manning, 2008) or be trained under maximum entropy framework (Zens and Ney, 2006). • The reordering is a decoding order problem. (Mari˜no et al., 2006) present a translation model that constitutes a language model of a sort of bilanguage composed of bilingual units. From the reordering point of view, the idea is that the correct reordering is a suitable order of translation units. (Feng et al., 2010) present a simpler version of (Mari˜no et al., 2006)’s model which utilize only source words to model the decoding order. • The reordering can be solved by outside heuristics. We can put human knowledge into the decoder. 
For example, the simple jump model using linear distance tells the decoder that usually the long range reordering should be avoided. (Cherry, 2008) uses information from dependency trees to make the decoding process keep syntactic cohesion. (Feng et al., 2012) present a method that utilizes predicate-argument structures from semantic role labeling results as soft constraints. In the reranking framework: in principle, all 322 the models in previous category can be used in the reranking framework, because in the reranking we have all the information (source and target words/phrases, alignment) about the translation process. (Och et al., 2004) describe the use of syntactic features in the rescoring step. However, they report the syntactic features contribute very small gains. One disadvantage of carrying out reordering in reranking is the representativeness of the N-best list is often a question mark. In this paper, we propose a novel tagging style reordering model which is under the category “The reordering is a decoding order problem”. Our model converts the decoding order problem into a sequence labeling problem, i.e. a tagging task. The remainder of this paper is organized as follows: Section 2 introduces the basement of this research: the principle of statistical machine translation. Section 3 describes the proposed model. Section 4 briefly describes several reordering models with which we compare our method. Section 5 provides the experimental configuration and results. Conclusion will be given in Section 6. 2 Translation System Overview In statistical machine translation, we are given a source language sentence fJ 1 = f1 . . . fj . . . fJ. The objective is to translate the source into a target language sentence eI 1 = e1 . . . ei . . . eI. The strategy is to choose the target sentence with the highest probability among all others: ˆe ˆI i = arg max I,eI 1 {Pr(eI 1|fJ 1 )} (1) We model Pr(eI 1|fJ 1 ) directly using a log-linear combination of several models (Och and Ney, 2002): Pr(eI 1|fJ 1 ) = exp  M P m=1 λmhm(eI 1, fJ 1 )  P I′,e′ I′ 1 exp  M P m=1 λmhm(e ′I′ 1 , fJ 1 )  (2) The denominator is to make the Pr(eI 1|fJ 1 ) to be a probability distribution and it depends only on the source sentence fJ 1 . For search, the decision rule is simply: ˆe ˆI i = arg max I,eI 1 n M X m=1 λmhm(eI 1, fJ 1 ) o (3) The model scaling factors λM 1 are trained with Minimum Error Rate Training (MERT). In this paper, the phrase-based machine translation system is utilized (Och et al., 1999; Zens et al., 2002; Koehn et al., 2003). 3 Tagging-style Reordering Model In this section, we describe the proposed novel model. First we will describe the training process. Then we explain how to use the model in the decoder. 3.1 Modeling Figure 1 shows the modeling steps. The first step is word alignment training. Figure 1(a) is an example after GIZA++ training. If we regard this alignment as a translation result, i.e. given the source sentence f7 1 , the system translates it into the target sentence e7 1, then the alignment link set {a1 = 3, a3 = 2, a4 = 4, a4 = 5, a5 = 7, a6 = 6, a7 = 6} reveals the decoding process, i.e. the alignment implies the order in which the source words should be translated, e.g. the first generated target word e1 has no alignment, we can regard it as a translation from a NULL source word; then the second generated target word e2 is translated from f3. We reorder the source side of the alignment to get Figure 1(b). 
Figure 1(b) implies the source sentence decoding sequence information, which is depicted in Figure 1(c). Using this example we describe the strategies we used for special cases in the transformation from Figure 1(b) to Figure 1(c): • ignore the unaligned target word, e.g. e1 • the unaligned source word should follow its preceding word, the unaligned feature is kept with a ∗symbol, e.g. f∗ 2 is after f1 • when one source word is aligned to multiple target words, only keep the alignment that links the source word to the first target word, e.g. f4 is linked to e5 and e6, only f4 −e5 is kept. In other words, we use this strategy to guarantee that every source word appears only once in the source decoding sequence. • when multiple source words are aligned to one target word, put together the source words according to their original relative positions, e.g. e6 is linked to f6 and f7. So in the decoding sequence, f6 is before f7. Now Figure 1(c) shows the original source sentence and its decoding sequence. By using the strategies above, it is guaranteed that the source sentence and its decoding sequence have the ex323 f1 f2 f3 f4 f5 f6 f7 e1 e2 e3 e4 e5 e6 e7 (a) f3 f1 f2 f4 f6 f7 f5 e1 e2 e3 e4 e5 e6 e7 (b) f1 f ∗ 2 f3 f4 f5 f6 f7 f3 f1 f2 f4 f6 f7 f5 (c) f1 f ∗ 2 f3 f4 f5 f6 f7 +1 +1 −2 0 +2 −1 −1 (d) BEGIN-Rmono Unalign Lreorder-Rmono Lmono-Rmono Lmono-Rreorder Lreorder-Rmono END-Lmono f1 f ∗ 2 f3 f4 f5 f6 f7 (e) Figure 1: modeling process illustration. actly same length. Hence the relation can be modeled by a function F(f) which assigns a value for each source word f. Figure 1(d) manifests this function. The positive function values mean that compared to the original position in the source sentence, its position in the decoding sequence should move rightwards. If the function value is 0, the word’s position in original source sentence and its decoding sequence is same. For example, f1 is the first word in the source sentence but it is the second word in the decoding sequence. So its function value is +1 (move rightwards one position). Now Figure 1(d) converts the reordering problem into a sequence labeling or tagging problem. To make the computational cost to a reasonable level, we do a final step simplification in Figure 1(e). Suppose the longest sentence length is 100, then according to Figure 1(d), there are 200 tags (from -99 to +99 plus the unalign tag). As we will see later, this number is too large for our task. We instead design nine tags. For a source word fj in one source sentence fJ 1 , the tag of fj will be one of the following: Unalign fj is an unaligned source word BEGIN-Rmono j = 1 and fj+1 is translated after fj (Rmono for right monotonic) BEGIN-Rreorder j = 1 and fj+1 is translated before fj (Rreorder for right reordered) END-Lmono j = J and fj−1 translated before fj (Lmono for left monotonic) END-Lreorder j = J and fj−1 translated after fj (Lreorder for left reordered) Lmono-Rmono 1 < j < J and fj−1 translated before fj and fj translated before fj+1 Lreorder-Rmono 1 < j < J and fj−1 translated after fj and fj translated before fj+1 Lmono-Rreorder 1 < j < J and fj−1 translated before fj and fj translated after fj+1 Lreorder-Rreorder 1 < j < J and fj−1 translated after fj and fj translated after fj+1 Up to this point, we have converted the reordering problem into a tagging problem with nine tags. The transformation in Figure 1 is conducted for all the sentence pairs in the bilingual training corpus. After that, we have built an “annotated” corpus for the training. 
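Building on the sketch above, the full transformation from alignment to tags can be written compactly. This is an illustrative re-implementation of the rules just listed, not the authors' code, and it assumes the first source word is aligned; sentence-initial unaligned words would need an extra convention.

```python
def tag_source_sentence(J, links):
    """Assign one of the nine reordering tags to every source position 1..J."""
    order = reorder_source_by_target(links)          # aligned words in decoding order
    aligned = set(order)
    sequence = []                                    # full source decoding sequence
    for j in order:
        sequence.append(j)
        k = j + 1
        while k <= J and k not in aligned:           # unaligned words follow their preceding word
            sequence.append(k)
            k += 1
    pos = {j: r for r, j in enumerate(sequence)}     # rank of f_j in the decoding sequence
    tags = []
    for j in range(1, J + 1):
        if j not in aligned:
            tags.append("Unalign")
        elif j == 1:
            tags.append("BEGIN-Rmono" if pos[j + 1] > pos[j] else "BEGIN-Rreorder")
        elif j == J:
            tags.append("END-Lmono" if pos[j - 1] < pos[j] else "END-Lreorder")
        else:
            left = "Lmono" if pos[j - 1] < pos[j] else "Lreorder"
            right = "Rmono" if pos[j + 1] > pos[j] else "Rreorder"
            tags.append(left + "-" + right)
    return tags
```

On the running example this reproduces the tag sequence of Figure 1(e): BEGIN-Rmono, Unalign, Lreorder-Rmono, Lmono-Rmono, Lmono-Rreorder, Lreorder-Rmono, END-Lmono.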
For this supervised learning task, we choose the approach conditional random fields (CRFs) (Lafferty et al., 2001; Sutton and Mccallum, 2006; Lavergne et al., 2010) and recurrent neural network (RNN) (Elman, 1990; Jordan, 1990; Lang et al., 1990). For the first method, we adopt the linear-chain CRFs. However, even for the simple linear-chain CRFs, the complexity of learning and inference grows quadratically with respect to the number of output labels and the amount of structural features which are with regard to adjacent pairs of labels. Hence, to make the computational cost as low as possible, two measures have been taken. Firstly, as described above we reduce the number of tags to nine. Secondly, we add source sentence part-ofspeech (POS) tags to the input. For features with window size one to three, both source words and its POS tags are used. For features with window size four and five, only POS tags are used. As the second method, we use recurrent neural network (RNN). RNN is closely related with Multilayer Perceptrons (MLP) (Rumelhart et al., 1986), but the output of one ore more hidden layers is reused as additional inputs for the network in the next time step. This structure allows the RNN to learn whole sequences without restricting itself to a fixed input window. A plain RNN has only access to the previous events in the input sequence. Hence we adopt the bidirectional RNN (BRNN) (Schuster and Paliwal, 1997) which reads the input sequence from both directions before making the prediction. The long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) is applied to 324 counter the effects that long distance dependencies are hard to learn with gradient descent. This is often referred to as vanishing gradient problem (Bengio et al., 1994). 3.2 Decoding Once the model training is finished, we make inference on develop and test corpora which means that we get the labels of the source sentences that need to be translated. In the decoder, we add a new model which checks the labeling consistency when scoring an extended state. During the search, a sentence pair (fJ 1 , eI 1) will be formally splitted into a segmentation SK 1 which consists of K phrase pairs. Each sk = (ik; bk, jk) is a triple consisting of the last position ik of the kth target phrase ˜ek. The start and end position of the kth source phrase ˜fk are bk and jk. Suppose the search state is now extended with a new phrase pair ( ˜fk, ˜ek): ˜fk := fbk . . . fjk and ˜ek := eik−1+1 . . . eik. We have access to the old coverage vector, from which we know if the new phrase’s left neighboring source word fbk−1 and right neighboring source word fjk+1 have been translated. We also have the word alignment within the new phrase pair, which is stored during the phrase extraction process. Based on the old coverage vector and alignment, we can repeat the transformation in Figure 1 to calculate the labels for the new phrase. The added model will then check the consistence between the calculated labels and the labels predicted by the reordering model. The number of source words that have inconsistent labels is the penalty and is then added into the log-linear framework as a new feature. 4 Comparative Study The second part of this paper is comparative study on reordering models. Here we briefly describe those models which will be compared to later. 4.1 Moses lexicalized reordering model A B Figure 2: lexicalized reordering model illustration. 
Moses (Koehn et al., 2007) contains a wordbased orientation model, which has three types of reordering: (m) monotone order, (s) switch with previous phrase and (d) discontinuous. Figure 2 is an example. The definitions of reordering types are as follows: monotone for current phrase, if a word alignment to the bottom left (point A) exists and there is no word alignment point at the bottom right position (point B) . swap for current phrase, if a word alignment to the bottom right (point B) exists and there is no word alignment point at the bottom left position (point A) . discontinuous all other cases Our implementation is same with the default behavior of Moses lexicalized reordering model. We count how often each extracted phrase pair is found with each of the three reordering types. The add-0.5 smoothing is then applied. Finally, the probability is estimated with maximum likelihood principle. 4.2 Maximum entropy reordering model Figure 3 is an illustration of (Zens and Ney, 2006) . j is the source word position which is aligned to the last target word of the current phrase. j ′ is the last source word position of the current phrase. j ′′ is the source word position which is aligned to the first target word position of the next phrase. (Zens and Ney, 2006) proposed a maximum entropy classifier to predict the orientation of the next phrase given the current phrase. The orientation class cj,j′,j′′ is defined as: cj,j′,j′′=    left, if j ′′<j right, if j ′′>j and j′′ −j′>1 monotone, if j ′′>j and j′′ −j′=1 (4) The orientation probability is modeled in a loglinear framework using a set of N feature functions hn(fJ 1 , eI 1, i, j, cj,j′,j′′), n = 1, . . . , N. The whole model is: pλN 1 (cj,j′,j′′|fJ 1 , eI 1, i, j) = exp( N P n=1 λnhn(fJ 1 ,eI 1,i,j,cj,j′ ,j′′ )) P c′ exp( N P n=1 λnhn(fJ 1 ,eI 1,i,j,c′)) (5) Different features can be used, we use the source and target word features to train the model. 325 Figure 3: phrase orientation: left, right and monotone. j is the source word position aligned to the last target word of current phrase. j ′ is the last source word position of current phrase. j ′′ is the source word position aligned to the first target word position of the next phrase. f1 f2 f3 f4 f5 f6 f7 e1 e2 e3 e4 e5 e6 e7 Figure 4: bilingual LM illustration. The bilingual sequence is e1 , e2 f3 , e3 f1 , e4 f4 , e5 f4 , e6 f6 f7 , e7 f5 . 4.3 Bilingual LM The previous two models belong to “The reordering is a classification problem”. Now we turn to “The reordering is a decoding order problem”. (Mari˜no et al., 2006) implement a translation model using n-grams. In this way, the translation system can take full advantage of the smoothing and consistency provided by standard back-off ngram models. Figure 4 is an example. The interpretation is that given the sentence pair (f7 1 , e7 1) and its alignment, the correct translation order is e1 , e2 f3 , e3 f1 , e4 f4 , e5 f4 , e6 f6 f7 , e7 f5 . Notice the bilingual units have been ordered according to the target side, as the decoder writes the translation in a left-to-right way. Using the example we describe the strategies used for special cases: • keep the unaligned target word, e.g. e1 • remove the unaligned source word, e.g. f2 • when one source word aligned to multiple target words, duplicate the source word for each target word, e.g. e4 f4 , e5 f4 • when multiple source words aligned to one target word, put together the source words for that target word, e.g. 
e6 f6 f7 After the operation in Figure 4 was done for all bilingual sentence pairs, we get a decoding sequence corpus. We build a 9-gram LM using SRILM toolkit (Stolcke, 2002) with modified Kneser-Ney smoothing. The model is added as an additional feature in Equation (2). To use the bilingual LM, the search state must be augmented to keep the bilingual unit decoding sequence. In search, the bilingual LM is applied similar to the standard target side LM. The bilingual sequence of phrase pairs will be extracted using the same strategy in Figure 4 . Suppose the search state is now extended with a new phrase pair ( ˜f, ˜e). ˜F is the bilingual sequence for the new phrase pair ( ˜f, ˜e) and ˜F i is the ith unit within ˜F. ˜F ′ is the bilingual sequence history for current state. We compute the feature score hbilm( ˜F, ˜F ′) of the extended state as follows: hbilm( ˜F, ˜F ′)=λ· | ˜F| X i=1 log p( ˜F i| ˜F ′, ˜F 1, · · · , ˜F i−1) (6) λ is the scaling factor for this model. | ˜F| is the length of the bilingual decoding sequence. 4.4 Source decoding sequence LM (Feng et al., 2010) present an simpler version of the above bilingual LM where they use only the source side to model the decoding order. The source word decoding sequence in Figure 4 is then f3 , f1 , f2 , f4 , f6 , f7 , f5 . We also build a 9-gram LM based on the source word decoding sequences. The usage of the model is same as bilingual LM. 4.5 Syntactic cohesion model The previous two models belong to “The reordering is a decoding order problem”. Now we turn to “The reordering can be solved by outside heuristics”. (Cherry, 2008) proposed a syntactic cohesion model. The core idea is that the syntactic structure of the source sentence should be preserved during translation. This structure is represented by a source sentence dependency tree. The algorithm is as follows: given the source sentence and its dependency tree, during the translation process, once a hypothesis is extended, check if the source dependency tree contains a subtree T such that: 326 • Its translation is already started (at least one node is covered) • It is interrupted by the new added phrase (at least one word in the new source phrase is not in T) • It is not finished (after the new phrase is added, there is still at least one free node in T) If so, we say this hypothesis violates the subtree T, and the model returns the number of subtrees that this hypothesis violates. 4.6 Semantic cohesion model (Feng et al., 2012) propose two structure features from semantic role labeling (SRL) results. Similar to the previous model, the SRL information is used as soft constraints. During decoding process, the first feature will report how many event layers that one search state violates and the second feature will report the amount of semantic roles that one search state violates. In this paper, the two features have been used together. So when the semantic cohesion model is used, both features will be triggered. 4.7 Tree-based jump model (Wang et al., 2007) present a pre-reordering method for Chinese-English translation task. In Section 3.6 of (Zhang, 2013), instead of doing hard reordering decision, the author uses the rules as soft constraints in the decoder. In this paper, we use the similar method as described in (Zhang, 2013). Our strategy is: firstly, we parse the source sentences to get constituency trees. Then we manipulate the trees using heuristics described by (Wang et al., 2007) . The leaf nodes in the revised tree constitute the reordered source sentence. 
Finally, in the log-linear framework (Equation 2) a new jump model is added which uses the reordered source sentence to calculate the cost. For example, the original sentence f1f2f3f4f5 is now converted by rules into the new sentence f1f5f3f2f4 . For decoding, we still use the original sentence. Suppose previously translated source phrase is f1 and the current phrase is f5 . Then the standard jump model gives cost qDist = 4 and the new tree-based jump model will return a cost qDist new = 1 . 5 Experiments In this section, we describe the baseline setup, the CRFs training results, the RNN training results and translation experimental results. 5.1 Experimental Setup Our baseline is a phrase-based decoder, which includes the following models: an n-gram targetside language model (LM), a phrase translation model and a word-based lexicon model. The latter two models are used for both directions: p(f|e) and p(e|f). Additionally we use phrase count features, word and phrase penalty. The reordering model for the baseline system is the distancebased jump model which uses linear distance. This model does not have hard limit. We list the important information regarding the experimental setup below. All those conditions have been kept same in this work. • lowercased training data from the GALE task (Table 1, UN corpus not included) alignment trained with GIZA++ • tuning corpus: NIST06 test corpora: NIST02 03 04 05 and 08 • 5-gram LM (1 694 412 027 running words) trained by SRILM toolkit (Stolcke, 2002) with modified Kneser-Ney smoothing training data: target side of bilingual data. • BLEU (Papineni et al., 2001) and TER (Snover et al., 2005) reported all scores calculated in lowercase way. • Wapiti toolkit (Lavergne et al., 2010) used for CRFs; RNN is built by the RNNLIB toolkit. Chinese English Sentences 5 384 856 Running Words 115 172 748 129 820 318 Vocabulary 1 125 437 739 251 Table 1: translation model and LM training data statistics Table 1 contains the data statistics used for translation model and LM. For the reordering model, we take two further filtering steps. Firstly, we delete the sentence pairs if the source sentence length is one. When the source sentence has only one word, the translation will be always monotonic and the reordering model does not need to learn this. Secondly, we delete the sentence pairs if the source sentence contains more than three contiguous unaligned words. When this happens, the sentence pair is usually low quality hence not suitable for learning. The main purpose of the two filtering steps is to further lay down the computational burden. The label distribution is depicted in Figure 5. We can see that most words are monotonic. We then divide the corpus to three parts: 327 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 ·107 BEGIN-Rmono BEGIN-Rreorder END-Lmono END-Lreorder Lmono-Rmono Lmono-Rreorder Lreorder-Rmono Lreorder-Rreorder UNALIGN Amount of Tags Figure 5: Tags distribution illustration. train, validation and test. The source side data statistics for the reordering model training is given in Table 2 (target side has only nine labels). train validation test Sentences 2 973 519 400 000 400 000 Running Words 62 263 295 8 370 361 8 382 086 Vocabulary 454 951 149686 150 007 Table 2: tagging-style model training data statistics 5.2 CRFs Training Results The toolkit Wapiti (Lavergne et al., 2010) is used in this paper. We choose the classical optimization algorithm limited memory BFGS (L-BFGS) (Liu and Nocedal, 1989). 
For regularization, Wapiti uses both the ℓ1 and ℓ2 penalty terms, yielding the elastic-net penalty of the form ρ1· ∥θ ∥1 +ρ2 2 · ∥θ ∥2 2 (7) In this work, we use as many features as possible because ℓ1 penalty ρ1 ∥θ ∥1 is able to yield sparse parameter vectors, i.e. using a ℓ1 penalty term implicitly performs the feature selection. The computational costs are given here: on a cluster with two AMD Opteron(tm) Processor 6176 (total 24 cores), the training time is about 16 hours, peak memory is around 120G. Several experiments have been done to find the suitable hyperparameter ρ1 and ρ2, we choose the model with lowest error rate on validation corpus for translation experiments. The error rate of the chosen model on test corpus (the test corpus in Table 2) is 25.75% for token error rate and 69.39% for sequence error rate. Table 3 is the feature template we set initially which generates 722 999 637 features. Some examples are given in Table 4. After training 36 902 363 features are kept. 5.3 RNN Training Results We also applied RNN to the task as an alternative approach to CRFs. The here used RNN implementation is RNNLIB which has support for long short term memory (LSTM) (Graves, 2008). We used a one of k encoding for the input word and also for the labels. After testing several configurations over the validation corpus we used a network with Feature Templates 1-gram source word features x[-4,0], x[-3,0], x[-2,0], x[-1,0] x[0,0], x[1,0], x[2,0], x[3,0], x[4,0] 1-gram source POS features x[-4,1], x[-3,1], x[-2,1], x[-1,1] x[0,1], x[1,1], x[2,1], x[3,1], x[4,1] 2-gram source word features x[-1,0]/x[0,0], x[ 0,0]/x[1,0] x[-1,1]/x[0,1], x[0,1]/x[1,1] 3-gram source word features x[-1,0]/x[0,0]/x[1,0] x[-2,0]/x[-1,0]/x[0,0] x[0,0]/x[1,0]/x[2,0] 3-gram source POS features x[0,1]/x[1,1]/x[2,1] x[-2,1]/x[-1,1]/x[0,1] x[-1,1]/x[0,1]/x[1,1] 4-gram source POS features x[0,1]/x[1,1]/x[2,1]/x[3,1] x[0,1]/x[-1,1]/x[-2,1]/x[-3,1] x[-1,1]/x[0,1]/x[1,1]/x[2,1] x[-2,1]/x[-1,1]/x[0,1]/x[1,1] 5-gram source POS features x[0,1]/x[1,1]/x[2,1]/x[3,1]/x[4,1] x[-4,1]/x[-3,1]/x[-2,1]/x[-1,1]/x[0,1] x[-2,1]/x[-1,1]/x[0,1]/x[1,1]/x[2,1] bigram output label feature x[-1,2]/x[0,2] Table 3: feature templates for CRFs training Words POS Label 基于 P BEGIN-Rmono 这 DT Lmono-Rmono 种 M Lmono-Rmono 看法 NN Lmono-Rmono , PU Lmono-Rmono 本人 PN Lmono-Rmono 是 VC UNALIGN ≪Current label 支持 VV Lmono-Rmono 修正案 NN Lmono-Rmono 的 DEC UNALIGN 。 PU END-Lmono Table 4: feature examples. x[row,col] specifies a token in the input data. row specfies the relative position from the current label and col specifies the absolute position of the column. So for the current lable in this table, x[−1, 2]/x[0, 2] is Lmono-Rmono/UNALIGN and x[−1, 1]/x[0, 1]/x[1, 1] is PN/VC/VV. LSTM 200 nodes in the hidden layer. The RNN has a token error rate of 27.31% and a sentence error rate of 77.00% over the test corpus in Table 2. The RNN is trained on a similar computer as above. RNNLIB utilizes only one thread. The training time is about three and a half days and peak memory consumption is 1G . 5.4 Comparison of CRFs and RNN errors CRFs performs better than RNN (token error rate 25.75% vs 27.31%). Both error rate values are much higher than what we usually see in part-ofspeech tagging task. The main reason is that the “annotated” corpus is converted from word alignment which contains lots of error. 
However, as we 328 hhhhhhhhhh Reference Prediction Unalign BEGIN-Rm BEGIN-Rr END-Lm END-Lr Lm-Rm Lr-Rm Lm-Rr Lr-Rr Unalign 687724 15084 850 7347 716 493984 107364 43457 9194 BEGIN-Rmono 3537 338315 6209 0 0 0 0 0 0 BEGIN-Rreorder 419 12557 17054 0 0 0 0 0 0 END-Lmono 1799 0 0 365635 3196 0 0 0 0 END-Lreorder 510 0 0 5239 7913 0 0 0 0 Lmomo-Rmono 188627 0 0 0 0 4032738 176682 150952 13114 Lreorder-Rmono 88177 0 0 0 0 369232 433027 27162 15275 Lmomo-Rreorder 32342 0 0 0 0 268570 24558 296033 10645 Lreorder-Rreorder 9865 0 0 0 0 34746 20382 16514 45342 Recall 50.36% 97.20% 56.79% 98.65% 57.92% 88.40% 46.42% 46.83% 35.74% Precision 67.89% 92.45% 70.73% 96.67% 66.92% 77.56% 56.83% 55.42% 48.46% Table 5: CRF Confusion Matrix. Abbreviations: Lmono(Lm) Lreorder(Lr) Rmono(Rm) Rreorder(Rr) hhhhhhhhhh Reference Prediction Unalign BEGIN-Rm BEGIN-Rr END-Lm END-Lr Lm-Rm Lr-Rm Lm-Rr Lr-Rr Unalign 589100 17299 901 7870 1000 639555 82413 24277 3305 BEGIN-Rmono 1978 339686 6397 0 0 0 0 0 0 BEGIN-Rreorder 186 13812 16032 0 0 0 0 0 0 END-Lmono 2258 0 0 364121 4251 0 0 0 0 END-Lreorde 699 0 0 4693 8269 1 0 0 0 Lmomo-Rmono 142777 1 0 0 0 4232113 105266 78692 3264 Lreorder-Rmono 96278 0 1 0 0 491989 323272 14635 6698 Lmomo-Rreorder 31118 0 0 0 0 380483 18144 198068 4335 Lreorder-Rreorder 12366 0 1 0 0 50121 25196 17008 22157 Recall 43.13% 97.59% 53.39% 98.24% 60.53% 92.77% 34.65% 31.33% 17.47% Precision 67.19% 91.61% 68.71% 96.66% 61.16% 73.04% 58.32% 59.54% 55.73% Table 6: RNN Confusion Matrix. Abbreviations: Lmono(Lm) Lreorder(Lr) Rmono(Rm) Rreorder(Rr) will show later, the model trained with both CRFs and RNN help to improve the translation quality. Table 5 and Table 6 demonstrate the confusion matrix of the CRFs and RNN errors over the test corpus. The rows represent the correct tag that the classifier should have predicted and the columns are the actually predicted tags. E.g. the number 687724 in first row and first column of Table 5 tells that there are 687724 correctly labeled Unalign tags. The number 15084 in first row and second column of Table 5 represents that there are 15084 Unalign tags labeled incorrectly to BeginRmono. Therefore, numbers on the diagonal from the upper left to the lower right corner represent the amount of correctly classified tags and all other numbers show the amount of false labels. The many zeros show that both classifier rarely make mistake for the label “BEGIN-∗” which only occur at the beginning of a sentence. The same is true for the “END-∗” labels. 5.5 Translation Results Results are summarized in Table 7. Please read the caption for the meaning of abbreviations. An Index column is added for score reference convenience (B for BLEU; T for TER). For the proposed model, significance testing results on both BLEU and TER are reported (B2 and B3 compared to B1, T2 and T3 compared to T1). We perform bootstrap resampling with bounds estimation as described in (Koehn, 2004). The 95% confidence threshold (denoted by ‡ in the table) is used to draw significance conclusions. We add a column avg. to show the average improvements. From Table 7 we see that the proposed reordering model using CRFs improves the baseline by 0.98 BLEU and 1.21 TER on average, while the proposed reordering model using RNN improves the baseline by 1.32 BLEU and 1.53 TER on average. For line B2 B3 and T2 T3, most scores are better than their corresponding baseline values with more than 95% confidence. 
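For readers unfamiliar with the significance test referred to here, a generic paired bootstrap can be sketched as follows; the metric callable, the 1000-sample count and the function name are illustrative defaults, not the exact configuration of (Koehn, 2004).

```python
import random

def paired_bootstrap(hyps_a, hyps_b, refs, metric, n_samples=1000, seed=0):
    """Fraction of resampled test sets on which system A outscores system B.
    metric(hyps, refs) is any corpus-level score where higher is better (e.g. BLEU)."""
    rng = random.Random(seed)
    n, wins = len(refs), 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]        # resample sentences with replacement
        sa = metric([hyps_a[i] for i in idx], [refs[i] for i in idx])
        sb = metric([hyps_b[i] for i in idx], [refs[i] for i in idx])
        wins += sa > sb
    return wins / n_samples        # > 0.95 corresponds to the 95% threshold used in Table 7
```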
The results show that our proposed idea improves the baseline system and RNN trained model performs better than CRFs trained model, in terms of both automatic measure and significance test. To investigate why RNN has lower performance for the tagging task but achieves better BLEU, we build a 3-gram LM on the source side of the training corpus in Table 2 and perplexity values are listed in Table 8. The perplexity of the test corpus for reordering model comparison is much lower than those NIST corpora for translation experiments. In other words, there exists mismatch of the data for reordering model training and actual MT data. This could explain why CRFs is superior to RNN for labeling problem while RNN is better for MT tasks. For the comparative study, the best method is the tree-based jump model (JUMPTREE). Our proposed model ranks the second position. The difference is tiny: on average only 0.08 BLEU (B3 and B10) and 0.15 TER (T3 and T10). Even with 329 Systems NIST02 NIST03 NIST04 NIST05 NIST08 avg. Index BLEU scores baseline 33.60 34.29 35.73 32.15 26.34 B1 baseline+CRFs 34.53 35.19 36.56‡ 33.30‡ 27.41‡ 0.98 B2 baseline+RNN 35.30‡ 35.34‡ 37.03‡ 33.80‡ 27.23‡ 1.32 B3 baseline+LRM 34.87 34.90 36.40 33.43 27.45 0.99 B4 baseline+MERO 34.91 34.83 36.29 33.69 26.66 0.85 B5 baseline+BILM 35.21 35.00 36.83 33.64 27.39 1.19 B6 baseline+SRCLM 34.55 34.52 36.18 32.84 27.03 0.50 B7 baseline+SRL 35.05 34.93 36.71 33.22 26.89 0.93 B8 baseline+SC 34.96 34.52 36.37 33.35 26.90 0.79 B9 baseline+JUMPTREE 35.10 35.53 37.12 34.18 27.19 1.40 B10 baseline+LRM+MERO+BILM+SRCLM+SRL+SC+JUMPTREE 36.77 36.16 38.10 35.67 28.52 2.62 B11 baseline+LRM+MERO+BILM+SRCLM+SRL+SC+JUMPTREE+RNN 36.99 37.00 38.79 35.86 28.99 3.10 B12 TER scores baseline 61.36 60.48 59.12 60.94 65.17 T1 baseline+CRFs 60.14‡ 58.91‡ 57.91‡ 59.77‡ 64.30‡ 1.21 T2 baseline+RNN 59.38‡ 58.87‡ 57.60‡ 59.56‡ 63.99‡ 1.53 T3 baseline+LRM 60.07 59.08 58.42 59.74 64.50 1.05 T4 baseline+MERO 60.19 59.58 58.51 59.49 64.68 0.92 T5 baseline+BILM 60.23 59.93 58.59 60.09 64.72 0.70 T6 baseline+SRCLM 60.27 59.55 58.40 60.16 64.61 0.82 T7 baseline+SRL 60.05 59.55 58.14 59.69 64.74 0.98 T8 baseline+SC 59.90 59.37 58.27 59.69 64.44 1.08 T9 baseline+JUMPTREE 59.53 58.54 57.67 58.90 64.04 1.68 T10 baseline+LRM+MERO+BILM+SRCLM+SRL+SC+JUMPTREE 59.16 57.84 56.83 58.03 63.20 2.40 T11 baseline+LRM+MERO+BILM+SRCLM+SRL+SC+JUMPTREE+RNN 58.67 57.67 56.27 58.00 63.09 2.67 T12 Table 7: Experimental results. CRFs and RNN mean the tagging-style model trained with CRFs or RNN; LRM for lexicalized reordering model (Koehn et al., 2007) ; MERO for maximum entropy reordering model (Zens and Ney, 2006) ; BILM for bilingual language model (Mari˜no et al., 2006) and SRCLM for its simpler version source decoding sequence model (Feng et al., 2010) ; SC for syntactic cohesion model (Cherry, 2008) ; SRL for semantic cohesion model (Feng et al., 2012); JUMPTREE for our tree-based jump model based on (Wang et al., 2007). Running Words OOV Perplexity Test in Table 2 8 382 086 33854 74.364 NIST02 22 749 195 176.806 NIST03 24 180 290 274.679 NIST04 49 612 320 170.507 NIST05 29 966 228 279.402 NIST08 32 502 511 408.067 Table 8: perplexity a strong system (B11 and T11), our model is still able to provide improvements (B12 and T12). 6 Conclusion In this paper, a novel tagging style reordering model has been proposed. By our method, the reordering problem is converted into a sequence labeling problem so that the whole source sentence is taken into consideration for reordering decision. 
By adding an unaligned word tag, the unaligned word phenomenon is automatically implanted in the proposed model. The model is utilized as soft constraints in the decoder. In practice, we do not experience decoding memory increase nor speed slow down. We choose CRFs and RNN to accomplish the sequence labeling task. The CRFs achieves lower error rate on the tagging task but RNN trained model is better for the translation task. Experimental results show that our model is stable and improves the baseline system by 0.98 BLEU and 1.21 TER (trained by CRFs) and 1.32 BLEU and 1.53 TER (trained by RNN). Most of the scores are better than their corresponding baseline values with more than 95% confidence. We also compare our method with several other popular reordering models. Our model ranks the second position which is slightly worse than the tree-based jump model. However, the tree-based jump model relies on manually designed reordering rules which does not exist for many language pairs while our model can be easily adapted to other translation tasks. We also show that the proposed model is able to improve a very strong baseline system. The main contributions of the paper are: propose the tagging-style reordering model and improve the translation quality; compare two sequence labeling techniques CRFs and RNN; compare our method with seven other reordering models. To our best knowledge, it is the first time that the above two comparisons have been reported . Acknowledgments This work was partly realized as part of the Quaero Programme, funded by OSEO, French State agency for innovation, and also partly based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-12-C-0015. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA). 330 References Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, March. Adam Berger, Peter F. Brown, Stephen A. Pietra, Vincent J. Pietra, Andrew S. Kehler, and Robert L. Mercer. 1996. Language translation apparatus and method of using Context-Based translation models. United States Patent, No. 5,510,981. Colin Cherry. 2008. Cohesive phrase-based decoding for statistical machine translation. In Proceedings of Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL: HLT), pages 72–80, Columbus, Ohio, USA, June. Association for Computational Linguistics. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179–211. Minwei Feng, Arne Mauser, and Hermann Ney. 2010. A source-side decoding sequence model for statistical machine translation. In Proceedings of the Association for Machine Translation in the Americas (AMTA), Denver, CO, USA, October. Minwei Feng, Weiwei Sun, and Hermann Ney. 2012. Semantic cohesion model for phrase-based SMT. In Proceedings of the International Conference on Computational Linguistics (COLING), pages 867– 878, Mumbai, India, December. The COLING 2012 Organizing Committee. Michel Galley and Christopher D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 848–856, Stroudsburg, PA, USA. Association for Computational Linguistics. Alex Graves. 
2008. Supervised Sequence Labelling with Recurrent Neural Networks. Ph.D. thesis, Technical University of Munich, July. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780, November. Michael I. Jordan. 1990. Attractor dynamics and parallelism in a connectionist sequential machine. IEEE Computer Society Neural Networks Technology Series, pages 112–127. Kevin Knight. 1999. Decoding complexity in wordreplacement translation models. Computational Linguistics, 25(4):607–615. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL) - Volume 1, pages 48–54, Stroudsburg, PA, USA. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), demonstration session, pages 177–180, Prague, Czech Republic, June. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 388–395, Barcelona, Spain, July. Association for Computational Linguistics. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML), pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Kevin J. Lang, Alex H. Waibel, and Geoffrey E. Hinton. 1990. A time-delay neural network architecture for isolated word recognition. Neural networks, 3(1):23–43, January. Thomas Lavergne, Olivier Capp´e, and Franc¸ois Yvon. 2010. Practical very large scale CRFs. In Proceedings the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 504–513. Association for Computational Linguistics, July. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory bfgs method for large scale optimization. Mathematical Programming, 45(3):503–528, December. Jos´e B. Mari˜no, Rafael E. Banchs, Josep Maria Crego, Adri`a de Gispert, Patrik Lambert, Jos´e A. R. Fonollosa, and Marta R. Costa-Juss`a. 2006. N-grambased machine translation. Computational Linguistics, 32(4):527–549, December. Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 295–302, Philadelphia, Pennsylvania, USA, July. Franz J. Och, Christoph Tillmann, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP), pages 20–28, University of Maryland, College Park, MD, USA, June. Association for Computational Linguistics. 331 Franz J. Och, Daniel Gildea, Sanjeev Khudanpur, Anoop Sarkar, Kenji Yamada, Alex Fraser, Shankar Kumar, Libin Shen, David Smith, Katherine Eng, Viren Jain, Zhen Jin, and Dragomir Radev. 2004. 
A smorgasbord of features for statistical machine translation. In Proceedings of the Conference on Statistical Machine Translation at the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL-HLT), pages 161–168, Boston, Massachusetts, USA, May. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. IBM Research Report, RC22176 (W0109-022), September. David. E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning internal representations by error propagation. In David E. Rumelhart and James L. McClelland, editors, Parallel distributed processing: explorations in the microstructure of cognition, vol. 1, pages 318–362. MIT Press, Cambridge, MA, USA. Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681, November. Matthew Snover, Bonnie Dorr, Richard Schwartz, John Makhoul, Linnea Micciulla, and Ralph Weischedel. 2005. A study of translation error rate with targeted human annotation. Technical Report LAMP-TR126, CS-TR-4755, UMIACS-TR-2005-58, University of Maryland, College Park, MD. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In Proceedings of International Conference on Spoken Language Processing (ICSLP), pages 901–904, Denver, Colorado, USA, September. ISCA. Charles Sutton and Andrew Mccallum, 2006. Introduction to Conditional Random Fields for Relational Learning, pages 93–128. MIT Press. Chao Wang, Michael Collins, and Philipp Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 737–745, Prague, Czech Republic, June. Association for Computational Linguistics. Dekai Wu. 1995. Stochastic inversion transduction grammars with application to segmentation, bracketing, and alignment of parallel corpora. In Proceedings of the 14th international joint conference on Artificial intelligence (IJCAI) - Volume 2, pages 1328–1335, San Francisco, CA, USA, August. Morgan Kaufmann Publishers Inc. Richard Zens and Hermann Ney. 2003. A comparative study on reordering constraints in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics (ACL) - Volume 1, pages 144–151, Stroudsburg, PA, USA. Association for Computational Linguistics. Richard Zens and Hermann Ney. 2006. Discriminative reordering models for statistical machine translation. In Proceedings of the Workshop on Statistical Machine Translation at the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL-HLT), pages 55–63, New York City, NY, June. Association for Computational Linguistics. Richard Zens, Franz J. Och, and Hermann Ney. 2002. Phrase-based statistical machine translation. In German Conference on Artificial Intelligence, pages 18– 32. Springer Verlag, September. Yuqi Zhang, Richard Zens, and Hermann Ney. 2007. Chunk-level reordering of source language sentences with automatically learned rules for statistical machine translation. 
In Proceedings of the Workshop on Syntax and Structure in Statistical Translation at the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL-HLT)/Association for Machine Translation in the Americas (AMTA), pages 1–8, Morristown, NJ, USA, April. Association for Computational Linguistics. Yuqi Zhang. 2013. The Application of Source Language Information in Chinese-English Statistical Machine Translation. Ph.D. thesis, Computer Science Department, RWTH Aachen University, May.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 333–342, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A Markov Model of Machine Translation using Non-parametric Bayesian Inference Yang Feng and Trevor Cohn Department of Computer Science The University of Sheffield Sheffield, United Kingdom [email protected] and [email protected] Abstract Most modern machine translation systems use phrase pairs as translation units, allowing for accurate modelling of phraseinternal translation and reordering. However phrase-based approaches are much less able to model sentence level effects between different phrase-pairs. We propose a new model to address this imbalance, based on a word-based Markov model of translation which generates target translations left-to-right. Our model encodes word and phrase level phenomena by conditioning translation decisions on previous decisions and uses a hierarchical Pitman-Yor Process prior to provide dynamic adaptive smoothing. This mechanism implicitly supports not only traditional phrase pairs, but also gapping phrases which are non-consecutive in the source. Our experiments on Chinese to English and Arabic to English translation show consistent improvements over competitive baselines, of up to +3.4 BLEU. 1 Introduction Recent years have witnessed burgeoning development of statistical machine translation research, notably phrase-based (Koehn et al., 2003) and syntax-based approaches (Chiang, 2005; Galley et al., 2006; Liu et al., 2006). These approaches model sentence translation as a sequence of simple translation decisions, such as the application of a phrase translation in phrase-based methods or a grammar rule in syntax-based approaches. In order to simplify modelling, most MT models make an independence assumption, stating that the translation decisions in a derivation are independent of one another. This conflicts with the intuition behind phrase-based MT, namely that translation decisions should be dependent on context. On one hand, the use of phrases can memorize local context and hence helps to generate better translation compared to word-based models (Brown et al., 1993; Och and Ney, 2003). On the other hand, this mechanism requires each phrase to be matched strictly and to be used as a whole, which precludes the use of discontinuous phrases and leads to poor generalisation to unseen data (where large phrases tend not to match). In this paper we propose a new model to drop the independence assumption, by instead modelling correlations between translation decisions, which we use to induce translation derivations from aligned sentences (akin to word alignment). We develop a Markov model over translation decisions, in which each decision is conditioned on previous n most recent decisions. Our approach employs a sophisticated Bayesian non-parametric prior, namely the hierarchical Pitman-Yor Process (Teh, 2006; Teh et al., 2006) to represent backoff from larger to smaller contexts. As a result, we need only use very simple translation units – primarily single words, but can still describe complex multi-word units through correlations between their component translation decisions. We further decompose the process of generating each target word into component factors: finishing the translating, jumping elsewhere in the source, emitting a target word and deciding the fertility of the source words. Overall our model has the following features: 1. 
enabling model parameters to be shared between similar translation decisions, thereby obtaining more reliable statistics and generalizing better from small training sets. 2. learning a much richer set of translation fragments, such as gapping phrases, e.g., the translation for the German werde ...ankommen in English is will arrive .... 3. providing a unifying framework spanning word-based and phrase-based model of translation, while incorporating explicit transla333 tion, insertion, deletion and reordering components. We demonstrate our model on Chinese-English and Arabic-English translation datasets. The model produces uniformly better translations than those of a competitive phrase-based baseline, amounting to an improvement of up to 3.4 BLEU points absolute. 2 Related Work Word based models have a long history in machine translation, starting with the venerable IBM translation models (Brown et al., 1993) and the hidden Markov model (Vogel et al., 1996). These models are still in wide-spread use today, albeit only as a preprocessing step for inferring word level alignments from sentence-aligned parallel corpora. They combine a number of factors, including distortion and fertility, which have been shown to improve word-alignment and translation performance over simpler models. Our approach is similar to these works, as we also develop a word-based model, and explicitly consider similar translation decisions, alignment jumps and fertility. We extend these works in two important respects: 1) while they assume a simple parameterisation by making iid assumptions about each translation factor, we instead allow for rich correlations by modelling sequences of translation decisions; and 2) we develop our model in the Bayesian framework, using a hierarchical PitmanYor Process prior with rich backoff semantics between high and lower order sequences of translation decisions. Together this results in a model with rich expressiveness but can still generalize well to unseen data. More recently, a number of authors have proposed Markov models for machine translation. Vaswani et al. (2011) propose a rule Markov model for a tree-to-string model which models correlations between pairs of mininal rules, and use Kneser-Ney smoothing to alleviate the problems of data sparsity. Similarly, Crego et al. (2011) develop a bilingual language model which incorporates words in the source and target languages to predict the next unit, which they use as a feature in a translation system. This line of work was extended by Le et al. (2012) who develop a novel estimation algorithm based around discriminative projection into continuous spaces. Also relevant is Durrani et al. (2011), who present a sequence model of translation including reordering. Our work also uses bilingual information, using the source words as part of the conditioning context. In contrast to these approaches which primarily address the decoding problem, we focus on the learning problem of inferring alignments from parallel sentences. Additionally, we develop a full generative model using a Bayesian prior, and incorporate additional factors besides lexical items, namely jumps in the source and word fertility. Another aspect of this paper is the implicit support for phrase-pairs that are discontinous in the source language. This idea has been developed explicitly in a number of previous approaches, in grammar based (Chiang, 2005) and phrase-based systems (Galley and Manning, 2010). 
The latter is most similar to this paper, and shows that discontinuous phrases compliment standard contiguous phrases, improving expressiveness and translation performance. Unlike their work, here we develop a complimentary approach by constructing a generative model which can induce these rich rules directly from sentence-aligned corpora. 3 Model Given a source sentence, our model infers a latent derivation which produces a target translation and meanwhile gives a word alignment between the source and the target. We consider a process in which the target string is generated using a left-to-right order, similar to the decoding strategy used by phrase-based machine translation systems (Koehn et al., 2003). During this process we maintain a position in the source sentence, which can jump around to allow for different sentence ordering in the target vs. source languages. In contrast to phrase-based models, we use words as our basic translation unit, rather than multi-word phrases. Furthermore, we decompose the decisions involved in generating each target word to a number of separate factors, where each factor is modelled separately and conditioned on a rich history of recent translation decisions. 3.1 Markov Translation Our model generates target translation left-toright word by word. The generative process employs the following recursive procedure to construct the target sentence conditioned on the source: i ←1 while Not finished do Decide whether to finish the translation, ξi 334 Step Source sentence Translation finish jump emission 0 Je le prends 1 Je le prends I no monotone Je →I 2 Je le prends I ’ll no insert null →’ll 3 Je le prends I ’ll take no forward prends →take 4 Je le prends I ’ll take that no backward le →that 5 Je le prends I ’ll take that one no stay le →one 6 Je le prends I ’ll take that one yes Figure 1: Translation agenda of Je le prends →I ’ll take that one. if ξi = false then Select a source word to jump to Emit a target word for the source word end if i ←i + 1 end while In the generation of each target word, our model includes three separate factors: the binary finish decision, a jump decision to move to a different source word, and emission which translates or otherwise inserts a word in the target string. This generative process resembles the sequence of translation decisions considered by a standard MT decoder (Koehn et al., 2003), but note that our approach differs in that there is no constraint that all words are translated exactly once. Instead source words can be skipped or repeatedly translated. This makes the approach more suitable for learning alignments, e.g., to account for word fertilities (see §3.3), while also permitting inference using Gibbs sampling (§4). More formally, we can express our probabilistic model as pbs(eI 1, aI 1|fJ 1 ) = I+1 Y i=1 p(ξi|fi−1 ai−n, ei−1 i−n) × IY i=1 p(τi|fi−1 ai−n, ei−1 i−n) × IY i=1 p(ei|τi, fi ai−n, ei−1 i−n) (1) where ξi is the finish decision for target position i, τi is the jump decision to source word fai and fi ai−n is the source words for target positions i −n, i −n + 1, ..., i. Each of the three distributions (finish, jump and emission) is drawn respective from hierarchical Pitman-Yor Process priors, as described in Section 3.2. The jump decision τi in Equation 1 demands further explanation. 
Instead of modelling jump distances explicitly, which poses problems for generalizing between different lengths of sentences and general parameter explosion, we consider a small handful of types of jump based on the distance between the current source word ai and the previous source word ai−1, i.e., di = ai −ai−1.1 We bin jumps into five types: a) insert; b) backward, if di < 0; c) stay, if di = 0; d) monotone, if di = 1; e) forward, if di > 1. The special jump type insert handles null alignments, denoted ai = 0 which licence spurious insertions in the target string. To illustrate this translation process, Figure 1 shows the example translation <Je le prends, I ’ll take that one>. Initially we set the source position before the first source word Je. Then in step 1, we decide not to finish (finish=no), jump to source word Je and translate it as I. Next, we again decide not to finish, jump to the null source word and insert ’ll. The process continues until in step 6 we elect to finish (finish=yes), at which point the translation is complete, with target string I ’ll take that one. 3.2 Hierarchical Pitman-Yor Process The Markov assumption limits the context of each distribution to the n most recent translation decisions, which limits the number of model parameters. However for any non-trivial value n > 0, overfitting is a serious concern. We counter the problem of a large parameter space using a Bayesian non-parametric prior, namely the hierarchical Pitman-Yor Process (PYP). The PYP describes distributions over possibly infinite event spaces that follow a power law, with few events taking the majority of the probability mass and a long tail of less frequent events. We consider a hierarchical PYP, where a sequence of chained PYP 1For a target position aligned to null, we denote its source word as null and set its aligned source position as that of the previous target word that is aligned to non-null. 335 priors allow backoff from larger to smaller contexts such that our model can learn rich contextual models for known (large) contexts while also still being able to generalize well to unseen contexts (using smaller histories). 3.2.1 Pitman-Yor Process A PYP (Pitman and Yor, 1997) is defined by its discount parameter 0 ≤a < 1, strength parameter b > −a and base distribution G0. For a distribution drawn from a PYP, G ∼PYP(a, b, G0), marginalising out G leads to a simple distribution which can be described using a variant of the Chinese Restaurant Process (CRP). In this analogy we imagine a restaurant has an infinite number of tables and each table can accommodate an infinite number of customers. Each customer (a sample from G) walks in one at a time and seats themselves at a table. Finally each table is served a communal dish (a draw from G0), which is served to each customer seated at the table. The assignment of customers to tables is such that popular tables are more likely to be chosen, and this richget-richer dynamic produces power-law distributions with few events (the dishes at popular tables) dominating the distribution. More formally, at time n a customer enters and selects a table k which is either a table having been seated (1 ≤k ≤K−) or an empty table (k = K−+ 1) by p(tn = k|t−n) = ( c− tk−a n−1+b 1 ≤k ≤K− aK−+b n−1+b k = K−+ 1 where tn is the table selected by the customer n, t−n is the seating arrangement of previous n −1 customers, c− tk is the number of customers seated at table k in t−n and K−= K(t−n) is the number of tables in t−n. 
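A minimal sketch of this seating rule for a single restaurant, ignoring the hierarchical backoff and the dish assignments discussed next, is given below; the function name is ours and is purely illustrative.

```python
def crp_seating_probs(table_counts, a, b):
    """Probability of the next customer joining each existing table, or opening a new
    one, under a Pitman-Yor Chinese Restaurant Process with discount a and strength b.
    table_counts[k] is the number of customers already seated at table k."""
    seated = sum(table_counts)                        # n - 1 customers already in the restaurant
    K = len(table_counts)
    denom = seated + b
    probs = [(c - a) / denom for c in table_counts]   # share an occupied table
    probs.append((a * K + b) / denom)                 # open a new (empty) table
    return probs                                      # sums to one
```

The rich-get-richer behaviour is visible directly: tables with large counts receive proportionally more probability mass, which is what produces the power-law distributions described above.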
If the customer sits at an empty table, a dish h is served to his table by the probability of G0(h), otherwise, he can only share with others the dish having been served to his table.2 Overall, the probability of the customer being served a dish h is p(on = h|t−n, o−n) = c− oh −aK− h n −1 + b + (aK−+ b) n −1 + b G0(h) where on is the dish served to the customer n, o−n is the dish accommodation of previous n −1 customers, c− oh is the number of customers who are 2We also say the customer is served with this dish. served with the dish h in o−n and K− h is the number of tables served with the dish h in t−n. The hierarchical PYP (hPYP; Teh (2006)) is an extension of the PYP in which the base distribution G0 is itself a PYP distribution. This parent (base) distribution can itself have a PYP as a base distribution, giving rise to hierarchies of arbitrary depth. Like the PYP, inference under the hPYP can be also described in terms of CRP whereby each table in one restaurant corresponds to a dish in the next deeper level, and is said to share the same dish. Whenever an empty table is seated in one level, a customer must enter the restaurant in the next deeper level and find a table to sit. This process continues until the customer is assigned a shared table or the deepest level of the hierarchy is reached. A similar process occurs when a customer leaves, where newly emptied tables must be propagated up the hierarchy in the form of departing customers. There is not space for a complete treatment of the hPYP and the particulars of inference; we refer the interested reader to Teh (2006). 3.2.2 A Hierarchical PYP Translation Model We draw the distributions for the various translation factors from respective hierarchical PYP priors, as shown in Figure 2 for the finish, jump and emission factors. For the emission factor (Figure 2c), we draw the target word ei from a distribution conditioned on the last two source and target words, as well as the current source word, fai and the current jump type τi. Here the draw of a target word corresponds to a customer entering and which target word to emit corresponds to which dish to be served to the customer in the CRP. The hierarchical prior encodes a backoff path in which the jump type is dropped first, followed by pairs of source and target words from least recent to most recent. The final backoff stages drop the current source word, terminating with the uniform base distribution over the target vocabulary V . The distributions over the other two factors in Figure 2 follow a similar pattern. Note however that these distributions don’t condition on the current source word, and consequently have fewer levels of backoff. The terminating base distribution for the finish factor is a uniform distribution with equal probability for finishing versus continuing. The jump factor has an additional conditioning variable t which encodes whether the previous alignment is near the start or end of the source sentence. 
This information affects which of the jump values are legal from the current position, such 336 ξi|fi−1 ai−2, ei−1 i−2 ∼Gξ fi−1 ai−2,ei−1 i−2 Gξ fi−1 ai−2,ei−1 i−2 ∼PYP(aξ 3, bξ 3, Gξ fai−1,ei−1) Gξ fai−1,ei−1 ∼PYP(aξ 2, bξ 2, Gξ) Gξ ∼PYP(aξ 1, bξ 1, Gξ 0) Gξ 0 ∼U(1 2) (a) Finish factor τi|fi−1 ai−2, ei−1 i−2, t ∼Gτ fi−1 ai−2,ei−1 i−2,t Gτ fi−1 ai−2,ei−1 i−2,t ∼PYP(aτ 3, bτ 3, Gτ fai−1,ei−1,t) Gτ fai−1,ei−1,t ∼PYP(aτ 2, bτ 2, Gτ t ) Gτ t ∼PYP(aτ 1, bτ 1, Gτ 0,t) Gτ 0,t ∼U (b) Jump factor ei|τi, fi ai−2, ei−1 i−2 ∼Ge τi,fi ai−2,ei−1 i−2 Ge τi,fi ai−2,ei−1 i−2 ∼PYP(ae 5, be 5, Ge fi ai−2,ei−1 i−2) Ge fi ai−2,ei−1 i−2 ∼PYP(ae 4, be 4, Ge fi ai−1,ei−1) Ge fi ai−1,ei−1 ∼PYP(ae 3, be 3, Ge fai) Ge fai ∼PYP(ae 2, be 2, Ge) Ge ∼PYP(ae 1, be 1, Ge 0) Ge 0 ∼U( 1 |V |) (c) Emission factor Figure 2: Distributions over the translation factors and their hierarchical priors. that a jump could not go outside the bounds of the source sentence. Accordingly we maintain separate distributions for each setting, and each has a different uniform base distribution parameterized according to the number of possible jump types. 3.3 Fertility For each target position, our Markov model may select a source word which has been covered, which means a source word may be linked to several target positions. Therefore, we introduce fertility to denote the number of target positions a source word is linked to in a sentence pair. Brown et al. (1993) have demonstrated the usefulness of fertility in probability estimation: IBM models 3– 5 exhibit large improvements over models 1–2. On these grounds, we include fertility to produce our advanced model, pad(eI 1, aI 1|fJ 1 )=pbs(eI 1, aI 1|fJ 1 ) J Y j=1 p(φj|fj j−n) (2) where φj is the fertility of source word fj in the sentence pair < fJ 1 , eI 1 > and pbs is the basic model defined in Eq. 1. In order to avoid problems of data sparsity, we bin fertility into three types, a) zero, if φ = 0; b) single, if φ = 1; and c) multiple, if φ > 1. We draw the fertility variables from a hierarchical PYP distribution, using three levels of backoff, φj|fj j−1 ∼Gφ fj j−1 Gφ fj j−1 ∼PYP(aφ 3, bφ 3, Gφ fj) Gφ fj ∼PYP(aφ 2, bφ 2, Gφ) Gφ ∼PYP(aφ 1, bφ 1, Gφ 0) Gφ 0 ∼U(1 3) where we condition the fertility of each word token on the token to its left, which we drop during the first stage of backoff to simple word-based fertility. The last level of backoff further generalises to a shared fertility across all words. In this way we gain the benefits of local context on fertility, while including more general levels to allow wider applicability. 4 Gibbs Sampling To train the model, we use Gibbs sampling, a Markov Chain Monte Carlo (MCMC) technique for posterior inference. Specifically we seek to infer the latent sequence of translation decisions given a corpus of sentence pairs. Given the structure of our model, a word alignment uniquely specifies the translation decisions and the sequence follows the order of the target sentence left to right. Our Gibbs sampler operates by sampling an update to the alignment of each target word in the corpus. It visits each sentence pair in the corpus in a random order and resamples the alignments for each target position as follows. First we discard the alignment to the current target word and decrement the counts of all factors affected by this alignment in their top level distributions (which will percolate down to the lower restaurants). 
Next we calculate posterior probabilities for all possible alignment to this target word based on the table occupancies in the hPYP. Finally we draw an alignment and increment the table counts for the translation decisions affected by the new alignment. More specifically, we consider sampling from Equation 2 with n = 2. When changing the alignment to a target word ei from j′ to j, the finish, jump and emission for three target positions i, i + 1, i + 2 and fertility for two source positions j, j′ may be affected. This leads to the following 337 decrement increment ξ(no | null, ’ll, Je, I) ξ(no | null, ’ll, Je, I) ξ(no | p..s, take, null, ’ll) ξ(no | Je, take, null, ’ll) ξ(no | le, that, p..s, take) ξ(no | le, that, Je, take) τ(f | null, ’ll, Je, I) τ(s| null, ’ll, Je, I) τ(b | p..s, take, null, ’ll) τ(m| Je, take, null, ’ll) τ(s | le, that, p..s, take) τ(s| le, that, Je, take) e(take |f, p..s, null, ’ll, Je, I) e(take |s, Je, null, ’ll, Je, I) e(that |b, le, p..s, take, null, ’ll) e(that |m, le, Je, take, null, ’ll) e(one |s, le, le, that, p..s, take) e(one |s, le, le, that, Je, take) φ(single | p..s, le) φ(multiple | Je, <s>) Table 1: The count update when changing the aligned source word of take from prends to Je in Figure 1. Key: f–forward s–stay b–backward m– monotone p..s–prends. posterior probability p(ai = j|t−i, o−i) ∝ i+2 Y l=i p(ξl)p(τl)p(el) × p(φj + 1)p(φj′ −1) p(φj)p(φj′) (3) where φj, φj′ are the fertilities before changing the link and for brevity we omit the conditioning contexts. For example, in Figure 1, we sample for target word take and change the aligned source word from prends to Je, then the items for which we need to decrement and increment the counts by one are shown in Table 1 and the posterior probability corresponding to the new alignment is the product of the hierarchical PYP probabilities of all increment items divided by the probability of the fertility of prends being single. Maintaining the current state of the hPYP as events are incremented and decremented is nontrivial and the naive approach requires significant book-keeping and has poor runtime behaviour. For this we adopt the approach of Blunsom et al. (2009b), who present a method for maintaining table counts without needing to record the table assignments for each translation decision. Briefly, this algorithm samples the table assignment during the increment and decrement operations, which is then used to maintain aggregate table statistics. This can be done efficiently and without the need for explicit table assignment tracking. 4.1 Hyperparameter Inference In our model, we treat all hyper-parameters {(ax, bx), x ∈(ξ, τ, e, φ)} as latent random variables rather than fixed parameters. This means our model is parameter free, and requires no user intervention when adapting to different data sets. For the discount parameter, we employ a uniform Beta distribution ax ∼Beta(1, 1) while for the strength parameter, we employ a vague Gamma distribution bx ∼Gamma(10, 0.1). All restaurants in the same level share the same hyper-prior and the hyper-parameters for all levels are resampled using slice sampling (Johnson and Goldwater, 2009) every 10 iterations. 4.2 Parallel Implementation As mentioned above, the hierarchical PYP takes into consideration a rich history to evaluate the probabilities of translation decisions. But this leads to difficulties when applying the model to large data sets, particularly in terms of tracking the table and customer counts. We apply the technique from Blunsom et al. 
(2009a) of using multiple processors to perform approximate Gibbs sampling which they showed achieved equivalent performance to the exact Gibbs sampler. Each process performs sampling on a subset of the corpus using local counts, and communicates changes to these counts after each full iteration. All the count deltas are then aggregated by each process to refresh the counts at the end of each iteration. In this way each process uses slightly “out-of-date” counts, but can process the data independently of the other processes. We found that this approximation improved the runtime significantly with no noticeable effect on accuracy. 5 Experiments In principle our model could be directly used as a MT decoder or as a feature in a decoder. However in this paper we limit our focus to inducing word alignments, i.e., by using the model to infer alignments which are then used in a standard phrasebased translation pipeline. We leave full decoding for later work, which we anticipate would further improve performance by exploiting gapping phrases and other phenomena that implicitly form part of our model but are not represented in the phrase-based decoder. Decoding under our model would be straight-forward in principle, as the generative process was designed to closely parallel the search procedure in the phrase-based model.3 Three data sets were used in the experiments: two Chinese to English data sets on small (IWSLT) and larger corpora (FBIS), and Arabic 3However the reverse translation probability would be intractable, as this does not decompose following a left-to-right generation order in the target language. 338 to English translation. Our experiments seek to test how the model compares to a GIZA++ baseline, quantifies the effect of each factor in the probabilistic model (i.e., jump, fertility), and the effect of different initialisations of the sampler. We present results on translation quality and word alignment. 5.1 Data Setup The Markov order of our model in all experiments was set to n = 2, as shown in Equation 2. For each data set, Gibbs sampling was performed on the training set in each direction (source-to-target and target-to-source), initialized using GIZA++.4 We used the grow heuristic to combine the GIZA++ alignments in both directions (Koehn et al., 2003), which we then intersect with the predictions of GIZA++ in the relevant translation direction. This initialisation setup gave the best results (we compare other initialisations in §5.2). The two Gibbs samplers were “burned in” for the first 1000 iterations, after which we ran a further 500 iterations selecting every 50th sample. A phrase table was constructed using these 10 sets of multiple alignments after combining each pair of directional alignments using the grow-diag-final heuristic. Using multiple samples in this way constitutes Monte Carlo averaging, which provides a better estimate of uncertainty cf. using a single sample.5 The alignment used for the baseline results was produced by combining bidirectional GIZA++ alignments using the grow-diag-final heuristic. We used the Moses machine translation decoder (Koehn et al., 2007), using the default features and decoding settings. We compared the performance of Moses using the alignment produced by our model and the baseline alignment, evaluating translation quality using BLEU (Papineni et al., 2002) with case-insensitive n-gram matching with n = 4. We used minimum error rate training (Och, 2003) to tune the feature weights to maximise the BLEU score on the development set. 
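A minimal sketch of this sampling schedule is given below. The sampler interface (sweep(), current_alignments()) is a hypothetical stand-in for one full Gibbs pass over the corpus as described in Section 4, not a real API.

```python
def collect_alignment_samples(sampler, burn_in=1000, extra=500, thin=50):
    """Run one directional Gibbs sampler, discarding the burn-in phase
    and keeping every `thin`-th state afterwards (here 500/50 = 10 samples).

    sampler.sweep() is assumed to resample the alignment of every target
    word once; sampler.current_alignments() returns a copy of the state.
    """
    samples = []
    for it in range(burn_in + extra):
        sampler.sweep()
        if it >= burn_in and (it - burn_in + 1) % thin == 0:
            samples.append(sampler.current_alignments())
    return samples
```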
5.2 IWSLT Corpus The first experiments are on the IWSLT data set for Chinese-English translation. The training data consists of 44k sentences from the tourism and travel domain. For the development set we use both ASR devset 1 and 2 from IWSLT 2005, and 4All GIZA++ alignments used in our experiments were produced by IBM model4. 5The effect on translation scores is modest, roughly amounting to +0.2 BLEU versus using a single sample. System Dev IWSLT05 baseline 45.78 49.98 Markov+fs+e 49.13 51.54 Markov+fs+e+j 49.68 52.55 Markov+fs+e+j+ft 51.32 53.41 Table 2: Impact of adding factors to our Markov model, showing BLEU scores on IWSLT. Key: fs– finish e–emission j–jump ft–fertility. for the test set we use the IWSLT 2005 test set. The language model is a 3-gram language model trained using the SRILM toolkit (Stolcke, 2002) on the English side of the training data. Because the data set is small, we performed Gibbs sampling on a single processor. First we check the effect of the model factors jump and fertility. Both emission and finish factors are indispensable to the generative translation process, and consequently these two factors are included in all runs. Table 2 shows translation result for various models, including a baseline and our Markov model with different combinations of factors. Note that even the simplest Markov model far outperforms the GIZA++ baseline (+1.5 BLEU) despite the baseline (IBM model 4) including a number of advanced features (e.g., jump, fertility) that are not present in the basic Markov model. This improvement is a result of the Markov model making use of rich bilingual contextual information coupled with sophisticated backoff, as opposed to GIZA++ which considers much more local events, with nothing larger than word-class bigrams. Our model shows large improvements as the extra factors are included. Jump yields an improvement of +1 BLEU by capturing consistent reordering patterns. Adding fertility results in a further +1 BLEU point improvement. Like the IBM models, our approach allows each source word to produce any number of target words. This capacity allows for many non-sensical alignments such as dropping many source words, or aligning single source words to several target words. Explicitly modelling fertility allows for more consistent alignments, especially for special words such as punctuation which usually have a fertility of one. Next we check the stability of our model with different initialisations. We compare different combination techniques for merging the GIZA++ alignments: grow-diag-final (denoted as gdf), intersection and grow. Table 3 shows that the different initialisations have only a small effect on 339 system gdf intersection grow baseline 49.98 48.44 50.11 our model 52.96 52.79 53.41 Table 3: Machine translation performance in BLEU % on the IWSLT 2005 Chinese-English test set. The Gibbs samplers were initialized with three different alignments, shown as columns. the results of our model. While the baseline results vary by up to 1.7 BLEU points for the different alignments, our Markov model provided more stable results with the biggest difference of 0.6. Among the three initialisations, we get the best result with the initialisation of grow. Gdf often introduces alignment links involving function words which should instead be aligned to null. Intersection includes many fewer alignments, typically only between content words, and the sparsity means that words can only have a fertility of either 0 or 1. 
This leads to the initialisation being a strong mode which is difficult to escape from during sampling. Despite this problem, it has only a mild negative effect on the performance of our model, which is probably due to improvements in the alignments for words that truly should be dropped or aligned only to one word. Grow provides a good compromise between gdf and intersection, and we use this initialisation in all our subsequent experiments. Figure 3 shows an example comparing alignments produced by our model and the GIZA++ baseline, in both cases after combining the two directional models. Note that GIZA++ has linked many function words which should be left unaligned, by using rare English terms as garbage collectors. Consequently this only allows for the extraction of few large phrase-pairs (e.g. <在 找, ’m looking for>) and prevents the extraction of some good phrases (e.g. <烧烤类型的, grill-type>, for “家” and “点的” are wrongly aligned to “grill-type”). In contrast, our model better aligns the function words, such that many more useful phrase pairs can be extracted, i.e., <在, ’m>, <找, looking for>, <烧烤类型, grilltype> and their combinations with neighbouring phrase pairs. 5.3 FBIS Corpus Theoretically, Bayesian models should outperform maximum likelihood approaches on small data sets, due to their improved modelling of un(a) GIZA++ baseline 我 在 找 一 家 好 点 的 , 安静 的 烧烤 类型 的 餐馆 。 i 'm looking for a nice , quiet grill-type restaurant . (b) our model Figure 3: Comparison of an alignment inferred by the baseline vs. our approach. certainty. For larger datasets, however, the difference between the two techniques should narrow. Hence one might expect that upon moving to larger translation datasets our gains might evaporate. This chain of reasoning ignores the fact that our model is considerably richer than the baseline IBM models, in that we model rich contextual correlations between translation decisions, and consequently our approach has a lower inductive bias. For this reason our model should continue to improve with more data, by inferring better estimates of translation decision n-grams. A caveat though is that inference by sampling becomes less efficient on larger data sets due to stronger modes, requiring more iterations for convergence. To test whether our improvements carry over to larger datasets, we assess the performance of our model on the FBIS Chinese-English data set. Here the training data consists of the non-UN portions and non-HK Hansards portions of the NIST training corpora distributed by the LDC, totalling 303k sentence pairs with 8m and 9.4m words of Chinese and English, respectively. For the development set we use the NIST 2002 test set, and evaluate performance on the test sets from NIST 2003 340 NIST02 NIST03 NIST05 baseline 33.31 30.09 29.01 our model 33.83 31.02 30.23 Table 4: Translation performance on Chinese to English translation, showing BLEU% for models trained on the FBIS data set. and 2005. The language model is a 3-gram LM trained on Xinhua portion of the Gigaword corpus using the SRILM toolkit with modified KneserNey smoothing. As the FBIS data set is large, we employed 3-processor MPI for each Gibbs sampler, which ran in half the time compared to using a single processor. Table 4 shows the results on the FBIS data set. Our model outperforms the baseline on both test sets by about 1 BLEU. This provides evidence that our model performs well in the large data setting, with our rich modelling of context still proving useful. 
The non-parametric nature of the model allows for rich dynamic backoff behaviour such that it can learn accurate models in both high and low data scenarios. 5.4 Arabic English translation Translation between Chinese and English is very difficult, particularly due to word order differences which are not handled well by phrase-based approaches. In contrast Arabic to English translation needs less reordering, and phrase-based models produce better translations. This translation task is a good test for the generality of our approach. Our Ar-En training data comprises several LDC corpora,6 using the same experimental setup as in Blunsom et al. (2009a). Overall there are 276k sentence pairs and 8.21m and 8.97m words in Arabic and English, respectively. We evaluate on the NIST test sets from 2003 and 2005, and the 2002 test set was used for MERT training. Table 5 shows the results. On all test sets our approach outperforms the baseline, and for the NIST03 test set the improvement is substantial, with a +0.74 BLEU improvement. In general the improvements are more modest than for the Chinese-English results above. We suggest that this is due to the structure of Arabic-English translation better suiting the modelling assumptions behind IBM model 4, particularly its bias towards monotone translations. Consequently the addi6LDC2004E72, LDC2004T17, LDC2004T18, LDC2006T02 F1% NIST02 NIST03 NIST05 baseline 64.9 57.00 48.75 48.93 our model 65.7 57.14 49.49 48.96 Table 5: Translation performance on Arabic to English translation, showing BLEU%. Also shown is word-alignment alignment accuracy. tional context provided by our model is less important. Table 5 also reports alignment results on manually aligned Ar-En sentence pairs,7 measuring the F1 score for the GIZA++ baseline alignments and the alignment from the final sample with our model.8 Our model outperforms the baseline, although the improvement is modest. 6 Conclusions and Future Work This paper proposes a word-based Markov model of translation which correlates translation decisions by conditioning on recent decisions, and incorporates a hierarchical Pitman-Yor process prior permitting elaborate backoff behaviour. The model can learn sequences of translation decisions, akin to phrases in standard phrase-based models, while simultaneously learning word level phenomena. This mechanism generalises the concept of phrases in phrase-based MT, while also capturing richer phenomena such as gapping phrases in the source. Experiments show that our model performs well both on the small and large datasets for two different translation tasks, consistently outperforming a competitive baseline. In this paper the model was only used to infer word alignments; in future work we intend to develop a decoding algorithm for directly translating with the model. Acknowledgements This work was supported by the EPSRC (grant EP/I034750/1). References Phil Blunsom, Trevor Cohn, Chris Dyer, and Miles Osborne. 2009a. A Gibbs sampler for phrasal synchronous grammar induction. In Proc. of ACLIJCNLP, pages 782–790. 7LDC2012T16 8Directional alignments are intersected using the growdiag-final heuristic. 341 Phil Blunsom, Trevor Cohn, Sharon Goldwater, and Mark Johnson. 2009b. A note on the implementation of hierarchical dirichlet processes. In Proc. of ACL-IJCNLP, pages 337–340. Peter E. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19:263–331. 
David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of ACL, pages 263–270. Josep Maria Crego, Franc¸ois Yvon, and Jos´e B. Mari˜no. 2011. Ncode: an open source bilingual ngram SMT toolkit. Prague Bull. Math. Linguistics, 96:49–58. Nadir Durrani, Helmut Schmid, and Alexander Fraser. 2011. A joint sequence translation model with integrated reordering. In Proc. of ACL:HLT, pages 1045–1054. Michel Galley and Christopher D. Manning. 2010. Accurate non-hierarchical phrase-based translation. In Proc. of NAACL, pages 966–974. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proc. of ACL, pages 961–968. Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In Proc. of HLT-NAACL, pages 317–325. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of HLTNAACL, pages 127–133. Philipp Koehn, Hieu Hoang, Alexandra Birch Mayne, Christopher Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL. Hai-Son Le, Alexandre Allauzen, and Franc¸ois Yvon. 2012. Continuous space translation models with neural networks. In Proc. of NAACL, pages 39–48. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proc. of COLING-ACL, pages 609– 616, July. Frans J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29:19–51. Frans J. Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proc. of ACL, pages 311–318. Jim Pitman and Marc Yor. 1997. The two-parameter poisson-dirichlet distribution derived from a stable subordinator. The Annals of Probability, 25(2):855– 900. Andreas Stolcke. 2002. SRILM: An extensible language modeling toolkit. In Proc. of ICSLP. Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. Yee Whye Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proc. of ACL, pages 985–992. Ashish Vaswani, Haitao Mi, Liang Huang, and David Chiang. 2011. Rule markov models for fast tree-tostring translation. In Proc. of ACL, pages 856–864. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proc. of COLING, pages 836–841. 342
2013
33
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 343–351, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Scaling Semi-supervised Naive Bayes with Feature Marginals Michael R. Lucas and Doug Downey Northwestern University 2133 Sheridan Road Evanston, IL 60208 [email protected] [email protected] Abstract Semi-supervised learning (SSL) methods augment standard machine learning (ML) techniques to leverage unlabeled data. SSL techniques are often effective in text classification, where labeled data is scarce but large unlabeled corpora are readily available. However, existing SSL techniques typically require multiple passes over the entirety of the unlabeled data, meaning the techniques are not applicable to large corpora being produced today. In this paper, we show that improving marginal word frequency estimates using unlabeled data can enable semi-supervised text classification that scales to massive unlabeled data sets. We present a novel learning algorithm, which optimizes a Naive Bayes model to accord with statistics calculated from the unlabeled corpus. In experiments with text topic classification and sentiment analysis, we show that our method is both more scalable and more accurate than SSL techniques from previous work. 1 Introduction Semi-supervised Learning (SSL) is a Machine Learning (ML) approach that utilizes large amounts of unlabeled data, combined with a smaller amount of labeled data, to learn a target function (Zhu, 2006; Chapelle et al., 2006). SSL is motivated by a simple reality: the amount of available machine-readable data is exploding, while human capacity for hand-labeling data for any given ML task remains relatively constant. Experiments in text classification and other domains have demonstrated that by leveraging unlabeled data, SSL techniques improve machine learning performance when human input is limited (e.g., (Nigam et al., 2000; Mann and McCallum, 2010)). However, current SSL techniques have scalability limitations. Typically, for each target concept to be learned, a semi-supervised classifier is trained using iterative techniques that execute multiple passes over the unlabeled data (e.g., Expectation-Maximization (Nigam et al., 2000) or Label Propagation (Zhu and Ghahramani, 2002)). This is problematic for text classification over large unlabeled corpora like the Web: new target concepts (new tasks and new topics of interest) arise frequently, and performing even a single pass over a large corpus for each new target concept is intractable. In this paper, we present a new SSL text classification approach that scales to large corpora. Instead of utilizing unlabeled examples directly for each given target concept, our approach is to precompute a small set of statistics over the unlabeled data in advance. Then, for a given target class and labeled data set, we utilize the statistics to improve a classifier. Specifically, we introduce a method that extends Multinomial Naive Bayes (MNB) to leverage marginal probability statistics P(w) of each word w, computed over the unlabeled data. The marginal statistics are used as a constraint to improve the class-conditional probability estimates P(w|+) and P(w|−) for the positive and negative classes, which are often noisy when estimated over sparse labeled data sets. We refer to the technique as MNB with Frequency Marginals (MNB-FM). 
In experiments with large unlabeled data sets and sparse labeled data, we find that MNBFM is both faster and more accurate on average than standard SSL methods from previous work, including Label Propagation, MNB with Expectation-Maximization,, and the recent Semisupervised Frequency Estimate (SFE) algorithm (Su et al., 2011). We also analyze how MNB343 FM improves accuracy, and find that surprisingly MNB-FM is especially useful for improving classconditional probability estimates for words that never occur in the training set. The paper proceeds as follows. We formally define the task in Section 2. Our algorithm is defined in Section 3. We present experimental results in Section 4, and analysis in Section 5. We discuss related work in Section 6 and conclude in Section 7 with a discussion of future work. 2 Problem Definition We consider a semi-supervised classification task, in which the goal is to produce a mapping from an instance space X consisting of T-tuples of non-negative integer-valued features w = (w1, . . . , wT ), to a binary output space Y = {−, +}. In particular, our experiments will focus on the case in which the wi’s represent word counts in a given document, in a corpus of vocabulary size T. We assume the following inputs: • A set of zero or more labeled documents DL = {(wd, yd)|d = 1, . . . , n}, drawn i.i.d. from a distribution P(w, y) for w ∈X and y ∈Y. • A large set of unlabeled documents DU = {(wd)|d = n+1, . . . , n+u} drawn from the marginal distribution P(w) = X y P(w, y). The goal of the task is to output a classifer f : X →Y that performs well in predicting the classes of given unlabeled documents. The metrics of evaluation we focus on in our experiments are detailed in Section 4. Our semi-supervised technique utilizes statistics computed over the labeled corpus, denoted as follows. We use N+ w to denote the sum of the occurrences of word w over all documents in the positive class in the labeled data DL. Also, let N+ = Pn w∈DL N+ w be the sum value of all word counts in the labeled positive documents. The count of the remaining words in the positive documents is represented as N+ ¬w = N+ −N+ w . The quantities N−, N− w , and N− ¬w are defined similarly for the negative class. 3 MNB with Feature Marginals We now introduce our algorithm, which scalably utilizes large unlabeled data stores for classification tasks. The technique builds upon the multinomial Naive Bayes model, and is denoted as MNB with Feature Marginals (MNB-FM). 3.1 MNB-FM Method In the text classification setting , each feature value wd represents count of observations of word w in document d. MNB makes the simplifying assumption that word occurrences are conditionally independent of each other given the class (+ or −) of the example. Formally, let the probability P(w|+) of the w in the positive class be denoted as θ+ w. Let P(+) denote the prior probability that a document is of the positive class, and P(−) = 1 −P(+) the prior for the negative class. Then MNB represents the class probability of an example as: P(+|d) = Y w∈d (θ+ w)wdP(+) Y w∈d (θ− w)wdP(−) + Y w∈d (θ+ w)wdP(+) (1) MNB estimates the parameters θ+ w from the corresponding counts in the training set. The maximum-likelihood estimate of θ+ w is N+ w /N +, and to prevent zero-probability estimates we employ “add-1” smoothing (typical in MNB) to obtain the estimate: θ+ w = N+ w + 1 N+ + |T|. After MNB calculates θ+ w and θ− w from the training set for each feature in the feature space, it can then classify test examples using Equation 1. 
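For reference, the MNB baseline just described can be sketched as follows; this is a minimal sketch with log-space scoring, and the data-structure choices are ours.

```python
import math
from collections import defaultdict

def train_mnb(labeled_docs, vocab):
    """labeled_docs: list of (word_count_dict, label) pairs, label in {'+', '-'}.
    Returns add-1 smoothed conditionals theta[y][w] and document-level priors."""
    counts = {'+': defaultdict(int), '-': defaultdict(int)}
    totals = {'+': 0, '-': 0}
    n_docs = {'+': 0, '-': 0}
    for words, y in labeled_docs:
        n_docs[y] += 1
        for w, c in words.items():
            counts[y][w] += c
            totals[y] += c
    theta = {y: {w: (counts[y][w] + 1.0) / (totals[y] + len(vocab))
                 for w in vocab} for y in ('+', '-')}
    prior = {y: n_docs[y] / float(len(labeled_docs)) for y in ('+', '-')}
    return theta, prior

def classify_mnb(words, theta, prior):
    """Score a document with Equation 1 in log space and return the best label."""
    scores = {}
    for y in ('+', '-'):
        scores[y] = math.log(prior[y]) + sum(
            c * math.log(theta[y][w]) for w, c in words.items() if w in theta[y])
    return max(scores, key=scores.get)
```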
MNB-FM attempts to improve MNB’s estimates of θ+ w and θ− w, using statistics computed over the unlabeled data. Formally, MNB-FM leverages the equality: P(w) = θ+ wPt(+) + θ− wPt(−) (2) The left-hand-side of Equation 2, P(w), represents the probability that a given randomly drawn token from the unlabeled data happens to be the word w. We write Pt(+) to denote the probability that a randomly drawn token (i.e. a word occurrence) from the corpus comes from the positive class. Note that Pt(+) can differ from P(+), the prior probability that a document is positive, due to variations in document length. Pt(−) is defined similarly for the negative class. MNB-FM is motivated by the insight that the left-hand-side of 344 Equation 2 can be estimated in advance, without knowledge of the target class, simply by counting the number of tokens of each word in the unlabeled data. MNB-FM then uses this improved estimate of P(w) as a constraint to improve the MNB parameters on the right-hand-side of Equation 2. We note that Pt(+) and Pt(−), even for a small training set, can typically be estimated reliably— every token in the training data serves as an observation of these quantities. However, for large and sparse feature spaces common in settings like text classification, many features occur in only a small fraction of examples—meaning θ+ w and θ− w must be estimated from only a handful of observations. MNB-FM attempts to improve the noisy estimates θ+ w and θ− w utilizing the robust estimate for P(w) computed over unlabeled data. Specifically, MNB-FM proceeds by assuming the MLEs for P(w) (computed over unlabeled data), Pt(+), and Pt(−) are correct, and reestimates θ+ w and θ− w under the constraint in Equation 2. First, the maximum likelihood estimates of θ+ w and θ− w given the training data DL are: arg max θ+ w,θ− w P(DL|θ+ w, θ− w) = arg max θ+ w,θ− w θ+(N+ w ) w (1 −θ+ w)(N+ ¬w) θ−(N− w ) w (1 −θ− w)(N− ¬w) = arg max θ+ w,θ− w N+ w ln(θ+ w) + N+ ¬w ln(1 −θ+ w)+ N− w ln(θ− w) + N− ¬w ln(1 −θ− w) (3) We can rewrite the constraint in Equation 2 as: θ− w = K −θ+ wL where for compactness we represent: K = P(w) Pt(−); L = Pt(+) Pt(−). Substituting the constraint into Equation 3 shows that we wish to choose θ+ w as: arg max θ+ w N+ w ln(θ+ w) + N+ ¬w ln(1 −θ+ w)+ N− w ln(K −Lθ+ w) + N− ¬w ln(1 −K + Lθ+ w) The optimal values for θ+ w are thus located at the solutions of: 0 = N+ w θ+ w + N+ ¬w θ+ w −1 + LN− w Lθ+ w −K + LN− ¬w Lθ+ w −K + 1 Both θ+ w and θ− w are constrained to valid probabilities in [0,1] when θ+ w ∈[0, K L ]. If N+ ¬w and N− w have non-zero counts, vertical asymptotes exist at 0 and K L and guarantee a solution in this range. Otherwise, a valid solution may not exist. In that case, we default to the add-1 Smoothing estimates used by MNB. Finally, after optimizing the values θ+ w and θ− w for each word w as described above, we normalize the estimates to obtain valid conditional probability distributions, i.e. with P w θ+ w = P w θ− w = 1 3.2 MNB-FM Example The following concrete example illustrates how MNB-FM can improve MNB parameters using the statistic P(w) computed over unlabeled data. The example comes from the Reuters Aptemod text classification task addressed in Section 4, using bag-of-words features for the Earnings class. In one experiment with 10 labeled training examples, we observed 5 positive and 5 negative examples, with the word “resources” occurring three times in the set (once in the positive class, twice in the negative class). 
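Before working through the numbers, the per-word re-estimation just derived can be sketched as follows. This is a minimal sketch rather than the authors' implementation: it maximizes the constrained log-likelihood numerically (the objective is strictly concave on its domain) instead of solving the stationarity condition in closed form, and all names are ours.

```python
import math

def mnbfm_reestimate(Nw_pos, Nnotw_pos, Nw_neg, Nnotw_neg, p_w, pt_pos, fallback):
    """Re-estimate (theta_pos, theta_neg) for one word w under the constraint
    P(w) = theta_pos * Pt(+) + theta_neg * Pt(-), maximizing the labeled-data
    likelihood of Equation 3.  `fallback` holds the add-1 estimates, returned
    when no valid constrained solution exists."""
    pt_neg = 1.0 - pt_pos
    K, L = p_w / pt_neg, pt_pos / pt_neg

    def loglik(t):                       # objective after substituting the constraint
        return (Nw_pos * math.log(t) + Nnotw_pos * math.log(1.0 - t)
                + Nw_neg * math.log(K - L * t)
                + Nnotw_neg * math.log(1.0 - K + L * t))

    eps = 1e-12
    lo = max(eps, (K - 1.0) / L + eps)   # keeps theta_neg <= 1
    hi = min(1.0 - eps, K / L - eps)     # keeps theta_neg >= 0
    if lo >= hi:
        return fallback
    for _ in range(100):                 # ternary search; loglik is concave here
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if loglik(m1) < loglik(m2):
            lo = m1
        else:
            hi = m2
    theta_pos = 0.5 * (lo + hi)
    return theta_pos, K - L * theta_pos
```

After every word has been re-estimated in this way, the two class-conditional vectors are renormalized to proper distributions, as described above.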
MNB uses add-1 smoothing to estimate the conditional probability of the word “resources” in each class as θ+ w = 1+1 216+33504 = 5.93e-5, and θ− w = 2+1 547+33504 = 8.81e-5. Thus, θ+ w θ− w = 0.673 implying that “resources” is a negative indicator of the Earnings class. However, this estimate is inaccurate. In fact, over the full dataset, the parameter values we observe are θ+ w = 93 168549 = 5.70e-4 and θ− w = 263 564717 = 4.65e-4, with a ratio of θ+ w θ− w = 1.223. Thus, in actuality, the word “resources” is a mild positive indicator of the Earnings class. Yet because MNB estimates its parameters from only the sparse training data, it can be inaccurate. The optimization in MNB-FM seeks to accord its parameter estimates with the feature frequency, computed from unlabeled data, of P(w) = 4.89e4. We see that compared with P(w), the θ+ w and θ− w that MNB estimates from the training data are both too low by almost an order of magnitude. Further, the maximum likelihood estimate for θ− w (based on an occurrence count of 2 out of 547 observations) is somewhat more reliable than that for θ+ w (1 of 216 observations). As a result, θ+ w is adjusted upward relatively more than θ− w via MNBFM’s constrained ML estimation. MNB-FM returns θ+ w = 6.52e-5 and θ− w = 6.04e-5. The ratio 345 θ+ w θ− w is 1.079, meaning MNB-FM correctly identifies the word “resources” as an indicator of the positive class. The above example illustrates how MNB-FM can leverage frequency marginal statistics computed over unlabeled data to improve MNB’s conditional probability estimates. We analyze how frequently MNB-FM succeeds in improving MNB’s estimates in practice, and the resulting impact on classification accuracy, below. 4 Experiments In this section, we describe our experiments quantifying the accuracy and scalability of our proposed technique. Across multiple domains, we find that MNB-FM outperforms a variety of approaches from previous work. 4.1 Data Sets We evaluate on two text classification tasks: topic classification, and sentiment detection. In topic classification, the task is to determine whether a test document belongs to a specified topic. We train a classifier separately (i.e., in a binary classification setting) for each topic and measure classification performance for each class individually. The sentiment detection task is to determine whether a document is written with a positive or negative sentiment. In our case, the goal is to determine if the given text belongs to a positive review of a product. 4.1.1 RCV1 The Reuters RCV1 corpus is a standard large corpus used for topic classification evaluations (Lewis et al., 2004). It includes 804,414 documents with several nested target classes. We consider the 5 largest base classes after punctuation and stopwords were removed. The vocabulary consisted of 288,062 unique words, and the total number of tokens in the data set was 99,702,278. Details of the classes can be found in Table 1. 4.1.2 Reuters Aptemod While MNB-FM is designed to improve the scalability of SSL to large corpora, some of the comparison methods from previous work were not tractable on the large topic classification data set RCV1. To evaluate these methods, we also experimented with the Reuters ApteMod dataset (Yang and Liu, 1999), consisting of 10,788 documents belonging to 90 classes. 
We consider the 10 most Class # Positive CCAT 381327 (47.40%) GCAT 239267 (29.74%) MCAT 204820 (25.46%) ECAT 119920 (14.91%) GPOL 56878 (7.07%) Table 1: RCV1 dataset details Class # Positive Earnings 3964 (36.7%) Acquisitions 2369 (22.0%) Foreign 717 (6.6%) Grain 582 (5.4%) Crude 578 (5.4%) Trade 485 (4.5%) Interest 478 (4.4%) Shipping 286 (2.7%) Wheat 283 (2.6%) Corn 237 (2.2%) Table 2: Aptemod dataset details frequent classes, with varying degrees of positive/negative skew. Punctuation and stopwords were removed during preprocessing. The Aptemod data set contained 33,504 unique words and a total of 733,266 word tokens. Details of the classes can be found in Table 2. 4.1.3 Sentiment Classification Data In the domain of Sentiment Classification, we tested on the Amazon dataset from (Blitzer et al., 2007). Stopwords listed in an included file were ignored for our experiments and we only the considered unigram features. Unlike the two Reuters data sets, each category had a unique set of documents of varying size. For our experiments, we only used the 10 largest categories. Details of the categories can be found in Table 3. In the Amazon Sentiment Classification data set, the task is to determine whether a review is positive or negative based solely on the reviewer’s submitted text. As such, the positive and negative Class # Instances # Positive Vocabulary Music 124362 113997 (91.67%) 419936 Books 54337 47767 (87.91%) 220275 Dvd 46088 39563 (85.84%) 217744 Electronics 20393 15918 (78.06%) 65535 Kitchen 18466 14595 (79.04%) 47180 Video 17389 15017 (86.36%) 106467 Toys 12636 10151 (80.33%) 37939 Apparel 8940 7642 (85.48%) 22326 Health 6507 5124 (78.75%) 24380 Sports 5358 4352 (81.22%) 24237 Table 3: Amazon dataset details 346 labels are equally relevant. For our metrics, we calculate the scores for both the positive and negative class and report the average of the two (in contrast to the Reuters data sets, in which we only report the scores for the positive class). 4.2 Comparison Methods In addition to Multinomial Naive Bayes (discussed in Section 3), we evaluate against a variety of supervised and semi-supervised techniques from previous work, which provide a representation of the state of the art. Below, we detail the comparison methods that we re-implemented for our experiments. 4.2.1 NB + EM We implemented a semi-supervised version of Naive Bayes with Expectation Maximization, based on (Nigam et al., 2000). We found that 15 iterations of EM was sufficient to ensure approximate convergence of the parameters. We also experimented with different weighting factors to assign to the unlabeled data. While performing per-data-split cross-validation was computationally prohibitive for NB+EM, we performed experiments on one class from each data set that revealed weighting unlabeled examples at 1/5 the weight of a labeled example performed best. We found that our re-implementation of NB+EM slightly outperformed published results on a separate data set (Mann and McCallum, 2010), validating our design choices. 4.2.2 Logistic Regression We implemented Logistic Regression using L2Normalization, finding this to outperform L1Normalized and non-normalized versions. The strength of the normalization was selected for each training data set of each size utilized in our experiments. The strength of the normalization in the logistic regression required cross-validation, which we limited to 20 values logarithmically spaced between 10−4 and 104. 
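A sketch of this tuning loop, assuming a scikit-learn-style interface (the paper does not name its implementation, and C below is scikit-learn's inverse regularization strength):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def tune_l2_logreg(X_train, y_train):
    """Choose an L2 regularization setting from 20 values logarithmically
    spaced between 1e-4 and 1e4, scored by mean F1 over 10 CV folds, then
    refit on the full training split."""
    best_c, best_f1 = None, -1.0
    for c in np.logspace(-4, 4, 20):
        clf = LogisticRegression(penalty='l2', C=c, max_iter=1000)
        f1 = cross_val_score(clf, X_train, y_train, cv=10, scoring='f1').mean()
        if f1 > best_f1:
            best_c, best_f1 = c, f1
    return LogisticRegression(penalty='l2', C=best_c, max_iter=1000).fit(X_train, y_train)
```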
The optimal value was selected based upon the best average F1 score over the 10 folds. We selected a normalization parameter separately for each subset of the training data during experimentation. 4.2.3 Label Propagation For our large unlabeled data set sizes, we found that a standard Label Propogation (LP) approach, which considers propagating information between all pairs of unlabeled examples, was not tractable. We instead implemented a constrained version of LP for comparison. In our implementation, we limit the number of edges in the propagation graph. Each node propagates to only to its 10 nearest neighbors, where distance is calculated as the cosine distance between the tf-idf representation of two documents. We found the tf-idf weighting to improve performance over that of simple cosine distance. Propagation was run for 100 iterations or until the entropy dropped below a predetermined threshold, whichever occurred first. Even with these aggressive constraints, Label Propagation was intractable to execute on some of the larger data sets, so we do not report LP results for the RCV1 dataset or for the 5 largest Amazon categories. 4.2.4 SFE We also re-implemented a version of the recent Semi-supervised Frequency Estimate approach (Su et al., 2011). SFE was found to outperform MNB and NB+EM in previous work. Consistent with our MNB implementation, we use Add1 Smoothing in our SFE calculations although its use is not specifically mentioned in (Su et al., 2011). SFE also augments multinomial Naive Bayes with the frequency information P(w), although in a manner distinct from MNB-FM. In particular, SFE uses the equality P(+|w) = P(+, w)/P(w) and estimates the rhs using P(w) computed over all the unlabeled data, rather than using only labeled data as in standard MNB. The primary distinction between MNB-FM and SFE is that SFE adjusts sparse estimates P(+, w) in the same way as non-sparse estimates, whereas MNB-FM is designed to adjust sparse estimates more than nonsparse ones. Further, it can be shown that as P(w) of a word w in the unlabeled data becomes larger than that in the labeled data, SFE’s estimate of the ratio P(w|+)/P(w|−) approaches one. Depending on the labeled data, such an estimate can be arbitrarily inaccurate. MNB-FM does not have this limitation. 4.3 Results For each data set, we evaluate on 50 randomly drawn training splits, each comprised of 1,000 randomly selected documents. Each set included at least one positive and one negative document. We 347 Data Set MNB-FM SFE MNB NBEM LProp Logist. Apte (10) 0.306 0.271 0.336 0.306 0.245 0.208 Apte (100) 0.554 0.389 0.222 0.203 0.263 0.330 Apte (1k) 0.729 0.614 0.452 0.321 0.267 0.702 Amzn (10) 0.542 0.524 0.508 0.475 0.470* 0.499 Amzn (100) 0.587 0.559 0.456 0.456 0.498* 0.542 Amzn (1k) 0.687 0.611 0.465 0.455 0.539* 0.713 RCV1 (10) 0.494 0.477 0.387 0.485 0.272 RCV1 (100) 0.677 0.613 0.337 0.470 0.518 RCV1 (1k) 0.772 0.735 0.408 0.491 0.774 * Limited to 5 of 10 Amazon categories Table 4: F1, training size in parentheses respected the order of the training splits such that each sample was a strict subset of any larger training sample of the same split. We evaluate on the standard metric of F1 with respect to the target class. For Amazon, in which both the “positive” and “negative” classes are potential target classes, we evaluate using macroaveraged scores. The primary results of our experiments are shown in Table 4. 
The results show that MNB-FM improves upon the MNB classifier substantially, and also tends to outperform the other SSL and supervised learning methods we evaluated. MNBFM is the best performing method over all data sets when the labeled data is limited to 10 and 100 documents, except for training sets of size 10 in Aptemod, where MNB has a slight edge. Tables 5 and 6 present detailed results of the experiments on the RCV1 data set. These experiments are limited to the 5 largest base classes and show the F1 performance of MNB-FM and the various comparison methods, excluding Label Propagation which was intractable on this data set. Class MNB-FM SFE MNB NBEM Logist. CCAT 0.641 0.643 0.580 0.639 0.532 GCAT 0.639 0.686 0.531 0.732 0.466 MCAT 0.572 0.505 0.393 0.504 0.225 ECAT 0.306 0.267 0.198 0.224 0.096 GPOL 0.313 0.283 0.233 0.326 0.043 Average 0.494 0.477 0.387 0.485 0.272 Table 5: RCV1: F1, |DL|= 10 Class MNB-FM SFE MNB NBEM Logist. CCAT 0.797 0.793 0.624 0.713 0.754 GCAT 0.849 0.848 0.731 0.837 0.831 MCAT 0.776 0.737 0.313 0.516 0.689 ECAT 0.463 0.317 0.017 0.193 0.203 GPOL 0.499 0.370 0.002 0.089 0.114 Average 0.677 0.613 0.337 0.470 0.518 Table 6: RCV1: F1, |DL|= 100 Method 1000 5000 10k 50k 100k MNB-FM 1.44 1.61 1.69 2.47 5.50 NB+EM 2.95 3.43 4.93 10.07 16.90 MNB 1.15 1.260 1.40 2.20 3.61 Labelprop 0.26 4.17 10.62 67.58 Table 7: Runtimes of SSL methods (sec.) The runtimes of our methods can be seen in Table 7. The results show the runtimes of the SSL methods discussed in this paper as the size of the unlabeled dataset grows. As expected, we find that MNB-FM has runtime similar to MNB, and scales much better than methods that take multiple passes over the unlabeled data. 5 Analysis From our experiments, it is clear that the performance of MNB-FM improves on MNB, and in many cases outperforms all existing SSL algorithms we evaluated. MNB-FM improves the conditional probability estimates in MNB and, surprisingly, we found that it can often improve these estimates for words that do not even occur in the training set. Tables 8 and 9 show the details of the improvements MNB-FM makes on the feature marginal estimates. We ran MNB-FM and MNB on the RCV1 class MCAT and stored the computed feature marginals for direct comparison. For each word in the vocabulary, we compared each classifier’s conditional probability ratios, i.e. θ+/θ−, to the true value over the entire data set. We computed which classifier was closer to the correct ratio for each word. These results were averaged over 5 iterations. From the data, we can see that MNB-FM improves the estimates for many words not seen in the training set as well as the most common words, even with small training sets. 5.1 Ranking Performance We also analyzed how well the different methods rank, rather than classify, the test documents. We evaluated ranking using the R-precision metric, equal to the precision (i.e. fraction of positive documents classified correctly) of the R highestranked test documents, where R is the total number of positive test documents. Logistic Regression performed particularly well on the R-Precision Metric, as can be seen in Tables 10, 11, and 12. Logistic Regression performed less well in the F1 metric. We find that NB+EM 348 Fraction Improved vs MNB Avg Improvement vs MNB Probability Mass Word Freq. 
Known Half Known Unknown Known Half Known Unknown Known Half Known Unknown 0-10−6 0.165 0.847 -0.805 0.349 0.02% 7.69% 10−6-10−5 0.200 0.303 0.674 0.229 -0.539 0.131 0.00% 0.54% 14.77% 10−5-10−4 0.322 0.348 0.592 -0.597 -0.424 0.025 0.74% 10.57% 32.42% 10−4-10−3 0.533 0.564 0.433 0.014 0.083 -0.155 7.94% 17.93% 7.39% > 10−3 Table 8: Analysis of Feature Marginal Improvement of MNB-FM over MNB (|DL| = 10). “Known” indicates words occurring in both positive and negative training examples, “Half Known” indicates words occurring in only positive or negative training examples, while “Unknown” indicates words that never occur in labelled examples. Data is for the RCV1 MCAT category. MNB-FM improves estimates by a substantial amount for unknown words and also the most common known and half-known words. Fraction Improved vs MNB Avg Improvement vs MNB Probability Mass Word Freq. Known Half Known Unknown Known Half Known Unknown Known Half Known Unknown 0-10−6 0.567 0.243 0.853 0.085 -0.347 0.143 0.00% 0.22% 7.49% 10−6-10−5 0.375 0.310 0.719 -0.213 -0.260 0.087 0.38% 4.43% 10.50% 10−5-10−4 0.493 0.426 0.672 -0.071 -0.139 0.067 18.68% 20.37% 4.67% 10−4-10−3 0.728 0.669 0.233 0.018 31.70% 1.56% > 10−3 Table 9: Analysis of Feature Marginal Improvement of MNB-FM over MNB (|DL| = 100). Data is for the RCV1 MCAT category (see Table 8). MNB-FM improves estimates by a substantial amount for unknown words and also the most common known and half-known words. performs particularly well on the R-precision metric on ApteMod, suggesting that its modelling assumptions are more accurate for that particular data set (NB+EM performs significantly worse on the other data sets, however). MNB-FM performs essentially equivalently well, on average, to the best competing method (Logistic Regression) on the large RCV1 data set. However, these experiments show that MNB-FM offers more advantages in document classification than in document ranking. The ranking results show that LR may be preferred when ranking is important. However, LR underperforms in classification tasks (in terms of F1, Tables 4-6). The reason for this is that LR’s learned classification threshold becomes less accurate when datasets are small and classes are highly Class MNB-FM SFE MNB NBEM LProp Logist. Apte (10) 0.353 0.304 0.359 0.631 0.490 0.416 Apte (100) 0.555 0.421 0.343 0.881 0.630 0.609 Apte (1k) 0.723 0.652 0.532 0.829 0.754 0.795 Amzn (10) 0.536 0.527 0.516 0.481 0.535* 0.544 Amzn (100) 0.614 0.562 0.517 0.480 0.573* 0.639 Amzn (1k) 0.717 0.650 0.562 0.483 0.639* 0.757 RCV1 (10) 0.505 0.480 0.421 0.450 0.512 RCV1 (100) 0.683 0.614 0.474 0.422 0.689 RCV1 (1k) 0.781 0.748 0.535 0.454 0.802 * Limited to 5 of 10 Amazon categories Table 10: R-Precision, training size in parentheses skewed. In these cases, LR classifies too frequently in favor of the larger class which is detrimental to its performance. This effect is visible in Tables 5 and 6, where LR’s performance significantly drops for the ECAT and GPOL classes. ECAT and GPOL represent only 14.91% and 7.07% of the RCV1 dataset, respectively. 6 Related Work To our knowledge, MNB-FM is the first approach that utilizes a small set of statistics computed over Data SetMNB-FM SFE MNB NBEM Logist. CCAT 0.637 0.631 0.620 0.498 0.653 GCAT 0.663 0.711 0.600 0.792 0.671 MCAT 0.580 0.492 0.477 0.510 0.596 ECAT 0.291 0.217 0.214 0.111 0.297 GPOL 0.354 0.352 0.193 0.341 0.341 Average 0.505 0.480 0.421 0.450 0.512 Table 11: RCV1: R-Precision, DL= 10 Class MNB-FM SFE MNB NBEM Logist. 
CCAT 0.805 0.797 0.765 0.533 0.809 GCAT 0.849 0.858 0.780 0.869 0.843 MCAT 0.782 0.753 0.579 0.533 0.774 ECAT 0.471 0.293 0.203 0.119 0.498 GPOL 0.509 0.370 0.042 0.056 0.520 Average 0.683 0.614 0.474 0.422 0.689 Table 12: RCV1: R-Precision, DL= 100 349 a large unlabeled data set as constraints to improve a semi-supervised classifier. Our experiments demonstrate that MNB-FM outperforms previous approaches across multiple text classification techniques including topic classification and sentiment analysis. Further, the MNB-FM approach offers scalability advantages over most existing semi-supervised approaches. Current popular Semi-Supervised Learning approaches include using Expectation-Maximization on probabilistic models (e.g. (Nigam et al., 2000)); Transductive Support Vector Machines (Joachims, 1999); and graph-based methods such as Label Propagation (LP) (Zhu and Ghahramani, 2002) and their more recent, more scalable variants (e.g. identifying a small number of representative unlabeled examples (Liu et al., 2010)). In general, these techniques require passes over the entirety of the unlabeled data for each new learning task, intractable for massive unlabeled data sets. Naive implementations of LP cannot scale to large unlabeled data sets, as they have time complexity that increases quadratically with the number of unlabeled examples. Recent LP techniques have achieved greater scalability through the use of parallel processing and heuristics such as Approximate-Nearest Neighbor (Subramanya and Bilmes, 2009), or by decomposing the similarity matrix (Lin and Cohen, 2011). Our approach, by contrast, is to pre-compute a small set of marginal statistics over the unlabeled data, which eliminates the need to scan unlabeled data for each new task. Instead, the complexity of MNB-FM is proportional only to the number of unique words in the labeled data set. In recent work, Su et al. propose the Semisupervised Frequency Estimate (SFE), which like MNB-FM utilizes the marginal probabilities of features computed from unlabeled data to improve the Multinomial Naive Bayes (MNB) classifier (Su et al., 2011). SFE has the same scalability advantages as MNB-FM. However, unlike our approach, SFE does not compute maximumlikelihood estimates using the marginal statistics as a constraint. Our experiments show that MNBFM substantially outperforms SFE. A distinct method for pre-processing unlabeled data in order to help scale semi-supervised learning techniques involves dimensionality reduction or manifold learning (Belkin and Niyogi, 2004), and for NLP tasks, identifying word representations from unlabeled data (Turian et al., 2010). In contrast to these approaches, MNB-FM preserves the original feature set and is more scalable (the marginal statistics can be computed in a single pass over the unlabeled data set). 7 Conclusion We presented a novel algorithm for efficiently leveraging large unlabeled data sets for semisupervised learning. Our MNB-FM technique optimizes a Multinomial Naive Bayes model to accord with statistics of the unlabeled corpus. In experiments across topic classification and sentiment analysis, MNB-FM was found to be more accurate and more scalable than several supervised and semi-supervised baselines from previous work. In future work, we plan to explore utilizing richer statistics from the unlabeled data, beyond word marginals. Further, we plan to experiment with techniques for unlabeled data sets that also include continuous-valued features. 
Lastly, we also wish to explore ensemble approaches that combine the best supervised classifiers with the improved class-conditional estimates provided by MNB-FM. 8 Acknowledgements This work was supported in part by DARPA contract D11AP00268. References Mikhail Belkin and Partha Niyogi. 2004. Semisupervised learning on riemannian manifolds. Machine Learning, 56(1):209–239. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Association for Computational Linguistics, Prague, Czech Republic. O. Chapelle, B. Sch¨olkopf, and A. Zien, editors. 2006. Semi-Supervised Learning. MIT Press, Cambridge, MA. Thorsten Joachims. 1999. Transductive inference for text classification using support vector machines. In Proceedings of the Sixteenth International Conference on Machine Learning, ICML ’99, pages 200– 209, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361–397. 350 Frank Lin and William W Cohen. 2011. Adaptation of graph-based semi-supervised methods to largescale text data. In The 9th Workshop on Mining and Learning with Graphs. Wei Liu, Junfeng He, and Shih-Fu Chang. 2010. Large graph construction for scalable semi-supervised learning. In ICML, pages 679–686. Gideon S. Mann and Andrew McCallum. 2010. Generalized expectation criteria for semi-supervised learning with weakly labeled data. J. Mach. Learn. Res., 11:955–984, March. Kamal Nigam, Andrew Kachites McCallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classification from labeled and unlabeled documents using em. Mach. Learn., 39(2-3):103–134, May. Jiang Su, Jelber Sayyad Shirab, and Stan Matwin. 2011. Large scale text classification using semisupervised multinomial naive bayes. In Lise Getoor and Tobias Scheffer, editors, ICML, pages 97–104. Omnipress. Amar Subramanya and Jeff A. Bilmes. 2009. Entropic graph regularization in non-parametric semisupervised classification. In Neural Information Processing Society (NIPS), Vancouver, Canada, December. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. Urbana, 51:61801. Yiming Yang and Xin Liu. 1999. A re-examination of text categorization methods. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, pages 42–49. ACM. X. Zhu and Z. Ghahramani. 2002. Learning from labeled and unlabeled data with label propagation. Technical report, Technical Report CMU-CALD02-107, Carnegie Mellon University. Xiaojin Zhu. 2006. Semi-supervised learning literature survey. 351
2013
34
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 352–361, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Learning Latent Personas of Film Characters David Bamman Brendan O’Connor Noah A. Smith School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213, USA {dbamman,brenocon,nasmith}@cs.cmu.edu Abstract We present two latent variable models for learning character types, or personas, in film, in which a persona is defined as a set of mixtures over latent lexical classes. These lexical classes capture the stereotypical actions of which a character is the agent and patient, as well as attributes by which they are described. As the first attempt to solve this problem explicitly, we also present a new dataset for the text-driven analysis of film, along with a benchmark testbed to help drive future work in this area. 1 Introduction Philosophers and dramatists have long argued whether the most important element of narrative is plot or character. Under a classical Aristotelian perspective, plot is supreme;1 modern theoretical dramatists and screenwriters disagree.2 Without addressing this debate directly, much computational work on narrative has focused on learning the sequence of events by which a story is defined; in this tradition we might situate seminal work on learning procedural scripts (Schank and Abelson, 1977; Regneri et al., 2010), narrative chains (Chambers and Jurafsky, 2008), and plot structure (Finlayson, 2011; Elsner, 2012; McIntyre and Lapata, 2010; Goyal et al., 2010). We present a complementary perspective that addresses the importance of character in defining 1“Dramatic action ...is not with a view to the representation of character: character comes in as subsidiary to the actions . . . The Plot, then, is the first principle, and, as it were, the soul of a tragedy: Character holds the second place.” Poetics I.VI (Aristotle, 335 BCE). 2“Aristotle was mistaken in his time, and our scholars are mistaken today when they accept his rulings concerning character. Character was a great factor in Aristotle’s time, and no fine play ever was or ever will be written without it” (Egri, 1946, p. 94); “What the reader wants is fascinating, complex characters” (McKee, 1997, 100). a story. Our testbed is film. Under this perspective, a character’s latent internal nature drives the action we observe. Articulating narrative in this way leads to a natural generative story: we first decide that we’re going to make a particular kind of movie (e.g., a romantic comedy), then decide on a set of character types, or personas, we want to see involved (the PROTAGONIST, the LOVE INTEREST, the BEST FRIEND). After picking this set, we fill out each of these roles with specific attributes (female, 28 years old, klutzy); with this cast of characters, we then sketch out the set of events by which they interact with the world and with each other (runs but just misses the train, spills coffee on her boss) – through which they reveal to the viewer those inherent qualities about themselves. This work is inspired by past approaches that infer typed semantic arguments along with narrative schemas (Chambers and Jurafsky, 2009; Regneri et al., 2011), but seeks a more holistic view of character, one that learns from stereotypical attributes in addition to plot events. 
This work also naturally draws on earlier work on the unsupervised learning of verbal arguments and semantic roles (Pereira et al., 1993; Grenager and Manning, 2006; Titov and Klementiev, 2012) and unsupervised relation discovery (Yao et al., 2011). This character-centric perspective leads to two natural questions. First, can we learn what those standard personas are by how individual characters (who instantiate those types) are portrayed? Second, can we learn the set of attributes and actions by which we recognize those common types? How do we, as viewers, recognize a VILLIAN? At its most extreme, this perspective reduces to learning the grand archetypes of Joseph Campbell (1949) or Carl Jung (1981), such as the HERO or TRICKSTER. We seek, however, a more finegrained set that includes not only archetypes, but stereotypes as well – characters defined by a fixed set of actions widely known to be representative of 352 a class. This work offers a data-driven method for answering these questions, presenting two probablistic generative models for inferring latent character types. This is the first work that attempts to learn explicit character personas in detail; as such, we present a new dataset for character type induction in film and a benchmark testbed for evaluating future work.3 2 Data 2.1 Text Our primary source of data comes from 42,306 movie plot summaries extracted from the November 2, 2012 dump of English-language Wikipedia.4 These summaries, which have a median length of approximately 176 words,5 contain a concise synopsis of the movie’s events, along with implicit descriptions of the characters (e.g., “rebel leader Princess Leia,” “evil lord Darth Vader”). To extract structure from this data, we use the Stanford CoreNLP library6 to tag and syntactically parse the text, extract entities, and resolve coreference within the document. With this structured representation, we extract linguistic features for each character, looking at immediate verb governors and attribute syntactic dependencies to all of the entity’s mention headwords, extracted from the typed dependency tuples produced by the parser; we refer to “CCprocessed” syntactic relations described in de Marneffe and Manning (2008): • Agent verbs. Verbs for which the entity is an agent argument (nsubj or agent). • Patient verbs. Verbs for which the entity is the patient, theme or other argument (dobj, nsubjpass, iobj, or any prepositional argument prep *). • Attributes. Adjectives and common noun words that relate to the mention as adjectival modifiers, noun-noun compounds, appositives, or copulas (nsubj or appos governors, or nsubj, appos, amod, nn dependents of an entity mention). 3All datasets and software for replication can be found at http://www.ark.cs.cmu.edu/personas. 4http://dumps.wikimedia.org/enwiki/ 5More popular movies naturally attract more attention on Wikipedia and hence more detail: the top 1,000 movies by box office revenue have a median length of 715 words. 6http://nlp.stanford.edu/software/ corenlp.shtml These three roles capture three different ways in which character personas are revealed: the actions they take on others, the actions done to them, and the attributes by which they are described. For every character we thus extract a bag of (r, w) tuples, where w is the word lemma and r is one of {agent verb, patient verb, attribute} as identified by the above rules. 
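To make the extraction rules above concrete, the sketch below collects (role, word) tuples from typed-dependency edges. The relation names follow the Stanford "CCprocessed" conventions cited above, but the DepEdge structure, the simplified attribute handling, and all function names are illustrative assumptions rather than the authors' actual pipeline (which additionally relies on POS filtering and coreference-resolved mentions).

```python
from collections import namedtuple

# Hypothetical typed-dependency edge: relation label, governor lemma, dependent lemma.
DepEdge = namedtuple("DepEdge", ["rel", "gov", "dep"])

AGENT_RELS = {"nsubj", "agent"}
PATIENT_RELS = {"dobj", "nsubjpass", "iobj"}       # plus any prep_* argument below
ATTRIBUTE_RELS = {"amod", "nn", "appos"}           # copular cases omitted for brevity

def extract_tuples(mention_heads, edges):
    """Collect (role, word) tuples for one character, given the lemmas of its
    mention headwords and the typed-dependency edges of a sentence.  Assumes
    governors of AGENT/PATIENT relations have already been filtered to verbs."""
    heads = set(mention_heads)
    tuples = []
    for e in edges:
        if e.rel in AGENT_RELS and e.dep in heads:
            tuples.append(("agent_verb", e.gov))
        elif (e.rel in PATIENT_RELS or e.rel.startswith("prep_")) and e.dep in heads:
            tuples.append(("patient_verb", e.gov))
        elif e.rel in ATTRIBUTE_RELS and e.gov in heads:
            tuples.append(("attribute", e.dep))
    return tuples

edges = [DepEdge("nsubj", "strangle", "Vader"),
         DepEdge("dobj", "arrest", "Vader"),
         DepEdge("amod", "Vader", "evil")]
print(extract_tuples(["Vader"], edges))
# [('agent_verb', 'strangle'), ('patient_verb', 'arrest'), ('attribute', 'evil')]
```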
2.2 Metadata Our second source of information consists of character and movie metadata drawn from the November 4, 2012 dump of Freebase.7 At the movie level, this includes data on the language, country, release date and detailed genre (365 non-mutually exclusive categories, including “Epic Western,” “Revenge,” and “Hip Hop Movies”). Many of the characters in movies are also associated with the actors who play them; since many actors also have detailed biographical information, we can ground the characters in what we know of those real people – including their gender and estimated age at the time of the movie’s release (the difference between the release date of the movie and the actor’s date of birth). Across all 42,306 movies, entities average 3.4 agent events, 2.0 patient events, and 2.1 attributes. For all experiments described below, we restrict our dataset to only those events that are among the 1,000 most frequent overall, and only characters with at least 3 events. 120,345 characters meet this criterion; of these, 33,559 can be matched to Freebase actors with a specified gender, and 29,802 can be matched to actors with a given date of birth. Of all actors in the Freebase data whose age is given, the average age at the time of movie is 37.9 (standard deviation 14.1); of all actors whose gender is known, 66.7% are male.8 The age distribution is strongly bimodal when conditioning on gender: the average age of a female actress at the time of a movie’s release is 33.0 (s.d. 13.4), while that of a male actor is 40.5 (s.d. 13.7). 3 Personas One way we recognize a character’s latent type is by observing the stereotypical actions they 7http://download.freebase.com/ datadumps/ 8Whether this extreme 2:1 male/female ratio reflects an inherent bias in film or a bias in attention on Freebase (or Wikipedia, on which it draws) is an interesting research question in itself. 353 perform (e.g., VILLAINS strangle), the actions done to them (e.g., VILLAINS are foiled and arrested) and the words by which they are described (VILLAINS are evil). To capture this intuition, we define a persona as a set of three typed distributions: one for the words for which the character is the agent, one for which it is the patient, and one for words by which the character is attributively modified. Each distribution ranges over a fixed set of latent word classes, or topics. Figure 1 illustrates this definition for a toy example: a ZOMBIE persona may be characterized as being the agent of primarily eating and killing actions, the patient of killing actions, and the object of dead attributes. The topic labeled eat may include words like eat, drink, and devour. eat kill love dead happy agent 0.0 0.2 0.4 0.6 0.8 1.0 eat kill love dead happy patient 0.0 0.2 0.4 0.6 0.8 1.0 eat kill love dead happy attribute 0.0 0.2 0.4 0.6 0.8 1.0 Figure 1: A persona is a set of three distributions over latent topics. In this toy example, the ZOMBIE persona is primarily characterized by being the agent of words from the eat and kill topics, the patient of kill words, and the object of words from the dead topic. 4 Models Both models that we present here simultaneously learn three things: 1.) a soft clustering over words to topics (e.g., the verb “strangle” is mostly a type of Assault word); 2.) a soft clustering over topics to personas (e.g., VILLIANS perform a lot of Assault actions); and 3.) a hard clustering over characters to personas (e.g., Darth Vader is a VILLAIN.) 
They each use different evidence: since our data includes not only textual features (in the form of actions and attributes of the characters) but also non-textual information (such as movie genre, age and gender), we design a model that exploits this additional source of information in discriminating between character types; since this extralinguistic information may not always be available, we also design a model that learns only from the text itself. We present the text-only model first α θ p z ψ w r φ γ ν W E D α p me md β µ σ2 z ψ w r φ γ ν W E D P Number of personas (hyperparameter) K Number of word topics (hyperparameter) D Number of movie plot summaries E Number of characters in movie d W Number of (role, word) tuples used by character e φk Topic k’s distribution over V words. r Tuple role: agent verb, patient verb, attribute ψp,r Distribution over topics for persona p in role r θd Movie d’s distribution over personas pe Character e’s persona (integer, p ∈{1..P}) j A specific (r, w) tuple in the data zj Word topic for tuple j wj Word for tuple j α Concentration parameter for Dirichlet model β Feature weights for regression model µ, σ2 Gaussian mean and variance (for regularizing β) md Movie features (from movie metadata) me Entity features (from movie actor metadata) νr, γ Dirichlet concentration parameters Figure 2: Above: Dirichlet persona model (left) and persona regression model (right). Bottom: Definition of variables. for simplicity. Throughout, V is the word vocabulary size, P is the number of personas, and K is the number of topics. 4.1 Dirichlet Persona Model In the most basic model, we only use information from the structured text, which comes as a bag of (r, w) tuples for each character in a movie, where w is the word lemma and r is the relation of the word with respect to the character (one of agent verb, patient verb or attribute, as outlined in §2.1 above). The generative story runs as follows. First, let there be K latent word topics; as in LDA (Blei et al., 2003), these are words that will be soft-clustered together by virtue of appearing in similar contexts. Each latent word cluster 354 φk ∼Dir(γ) is a multinomial over the V words in the vocabulary, drawn from a Dirichlet parameterized by γ. Next, let a persona p be defined as a set of three multinomials ψp over these K topics, one for each typed role r, each drawn from a Dirichlet with a role-specific hyperparameter (νr). Every document (a movie plot summary) contains a set of characters, each of which is associated with a single latent persona p; for every observed (r, w) tuple associated with the character, we sample a latent topic k from the role-specific ψp,r. Conditioned on this topic assignment, the observed word is drawn from φk. The distribution of these personas for a given document is determined by a document-specific multinomial θ, drawn from a Dirichlet parameterized by α. Figure 2 (above left) illustrates the form of the model. To simplify inference, we collapse out the persona-topic distributions ψ, the topic-word distributions φ and the persona distribution θ for each document. 
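The generative story just described can be summarized by a short forward sampler. The sketch below is only illustrative: the corpus sizes and symmetric hyperparameter values are invented (the paper optimizes α, ν and γ by slice sampling, as described next), and a single ν is shared across roles for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, P = 1000, 25, 50                      # vocabulary, topics, personas (toy sizes)
ROLES = ["agent_verb", "patient_verb", "attribute"]
alpha, gamma, nu = 1.0, 0.1, 0.1            # symmetric Dirichlet hyperparameters (assumed)

phi = rng.dirichlet(np.full(V, gamma), size=K)                   # topic-word multinomials
psi = {r: rng.dirichlet(np.full(K, nu), size=P) for r in ROLES}  # persona-topic, per role

def generate_movie(n_chars=4, tuples_per_char=6):
    """Forward-sample one plot summary: a persona per character, then a bag of
    (role, word-id) tuples drawn through the persona's role-specific topic mixture."""
    theta = rng.dirichlet(np.full(P, alpha))                # movie's persona distribution
    characters = []
    for _ in range(n_chars):
        p = rng.choice(P, p=theta)                          # latent persona
        tuples = []
        for _ in range(tuples_per_char):
            r = ROLES[rng.integers(len(ROLES))]
            z = rng.choice(K, p=psi[r][p])                  # latent topic for this tuple
            w = rng.choice(V, p=phi[z])                     # observed word
            tuples.append((r, int(w)))
        characters.append({"persona": int(p), "tuples": tuples})
    return characters

print(generate_movie()[0])
```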
Inference on the remaining latent variables – the persona p for each character type and the topic z for each word associated with that character – is conducted via collapsed Gibbs sampling (Griffiths and Steyvers, 2004); at each iteration, for each character e, we sample their persona pe: P(pe = k | p−e, z, α, ν) ∝  c−e d,k + αk  × Q j (c−e rj,k,zj +νrj ) (c−e rj,k,⋆+Kνrj ) (1) Here, c−e d,k is the count of all characters in document d whose current persona sample is also k (not counting the current character e under consideration);9 j ranges over all (rj, wj) tuples associated with character e. Each c−e rj,k,zj is the count of all tuples with role rj and current topic zj used with persona k. c−e rj,k,⋆is the same count, summing over all topics z. In other words, the probability that character e embodies persona k is proportional to the number of other characters in the plot summary who also embody that persona (plus the Dirichlet hyperparameter αk) times the contribution of each observed word wj for that character, given its current topic assignment zj. Once all personas have been sampled, we sam9The −e superscript denotes counts taken without considering the current sample for character e. ple the latent topics for each tuple as the following. P(zj = k | p, z−j, w, r, ν, γ) ∝ (c−j rj,p,k+νrj ) (c−j rj,p,⋆+Kνrj ) × (c−j k,wj +γ) (c−j k,⋆+V γ) (2) Here, conditioned on the current sample p for the character’s persona, the probability that tuple j originates in topic k is proportional to the number of other tuples with that same role rj drawn from the same topic for that persona (c−j rj,p,k), normalized by the number of other rj tuples associated with that persona overall (c−j rj,p,⋆), multiplied by the number of times word wj is associated with that topic (c−j k,wj) normalized by the total number of other words associated with that topic overall (c−j k,⋆). We optimize the values of the Dirichlet hyperparameters α, ν and γ using slice sampling with a uniform prior every 20 iterations for the first 500 iterations, and every 100 iterations thereafter. After a burn-in phase of 10,000 iterations, we collect samples every 10 iterations (to lessen autocorrelation) until a total of 100 have been collected. 4.2 Persona Regression To incorporate observed metadata in the form of movie genre, character age and character gender, we adopt an “upstream” modeling approach (Mimno and McCallum, 2008), letting those observed features influence the conditional probability with which a given character is expected to assume a particular persona, prior to observing any of their actions. This captures the increased likelihood, for example, that a 25-year-old male actor in an action movie will play an ACTION HERO than he will play a VALLEY GIRL. To capture these effects, each character’s latent persona is no longer drawn from a documentspecific Dirichlet; instead, the P-dimensional simplex is the output of a multiclass logistic regression, where the document genre metadata md and the character age and gender metadata me together form a feature vector that combines with personaspecific feature weights to form the following loglinear distribution over personas, with the probability for persona k being: P(p = k | md, me, β) = exp([md;me]⊤βk) 1+PP −1 j=1 exp([md;me]⊤βj) (3) The persona-specific β coefficients are learned through Monte Carlo Expectation Maximization 355 (Wei and Tanner, 1990), in which we alternate between the following: 1. 
Given current values for β, for all characters e in all plot summaries, sample values of pe and zj for all associated tuples. 2. Given input metadata features m and the associated sampled values of p, find the values of β that maximize the standard multiclass logistic regression log likelihood, subject to ℓ2 regularization. Figure 2 (above right) illustrates this model. As with the Dirichlet persona model, inference on p for step 1 is conducted with collapsed Gibbs sampling; the only difference in the sampling probability from equation 1 is the effect of the prior, which here is deterministically fixed as the output of the regression. P(pe = k | p−e, z, ν, md, me, β) ∝ exp([md; me]⊤βk) × Q j (c−e rj,k,zj +νrj ) (c−e rj,k,⋆+Kνrj ) (4) The sampling equation for the topic assignments z is identical to that in equation 2. In practice we optimize β every 1,000 iterations, until a burn-in phase of 10,000 iterations has been reached; at this point we following the same sampling regime as for the Dirichlet persona model. 5 Evaluation We evaluate our methods in two quantitative ways by measuring the degree to which we recover two different sets of gold-standard clusterings. This evaluation also helps offer guidance for model selection (in choosing the number of latent topics and personas) by measuring performance on an objective task. 5.1 Character Names First, we consider all character names that occur in at least two separate movies, generally as a consequence of remakes or sequels; this includes proper names such as “Rocky Balboa,” “Oliver Twist,” and “Indiana Jones,” as well as generic type names such as “Gang Member” and “The Thief”; to minimize ambiguity, we only consider character names consisting of at least two tokens. Each of these names is used by at least two different characters; for example, a character named “Jason Bourne” is portrayed in The Bourne Identity, The Bourne Supremacy, and The Bourne Ultimatum. While these characters are certainly free to assume different roles in different movies, we believe that, in the aggregate, they should tend to embody the same character type and thus prove to be a natural clustering to recover. 970 character names occur at least twice in our data, and 2,666 individual characters use one of those names. Let those 970 character names define 970 unique gold clusters whose members include the individual characters who use that name. 5.2 TV Tropes As a second external measure of validation, we consider a manually created clustering presented at the website TV Tropes,10 a wiki that collects user-submitted examples of common tropes (narrative, character and plot devices) found in television, film, and fiction, among other media. While TV Tropes contains a wide range of such conventions, we manually identified a set of 72 tropes that could reasonably be labeled character types, including THE CORRUPT CORPORATE EXECUTIVE, THE HARDBOILED DETECTIVE, THE JERK JOCK, THE KLUTZ and THE SURFER DUDE. We manually aligned user-submitted examples of characters embodying these 72 character types with the canonical references in Freebase to create a test set of 501 individual characters. While the 72 character tropes represented here are a more subjective measure, we expect to be able to at least partially recover this clustering. 
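Before turning to the clustering metrics, the sketch below spells out the log-linear prior of Eq. (3) that the persona regression model substitutes for the Dirichlet prior; as in the paper's formulation, one persona acts as a reference class with unnormalized score zero. The feature encoding, dimensions, and random β are invented purely for illustration.

```python
import numpy as np

def persona_prior(movie_feats, char_feats, beta):
    """Softmax prior over P personas (Eq. 3).  `beta` has shape (P - 1, d); the
    omitted P-th persona is the reference class whose unnormalized score is 0."""
    x = np.concatenate([movie_feats, char_feats])       # [m_d ; m_e]
    scores = np.append(beta @ x, 0.0)                   # P unnormalized log-odds
    scores -= scores.max()                              # numerical stability
    expd = np.exp(scores)
    return expd / expd.sum()

# Toy example: three binary genre indicators plus [age, is_male], and P = 4 personas.
rng = np.random.default_rng(0)
beta = rng.normal(size=(3, 5))
md = np.array([1.0, 0.0, 1.0])          # hypothetical genre features for the movie
me = np.array([0.34, 1.0])              # hypothetical (scaled) age and gender features
print(persona_prior(md, me, beta))      # sums to 1 over the 4 personas
```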
5.3 Variation of Information To measure the similarity between the two clusterings of movie characters, gold clusters G and induced latent persona clusters C, we calculate the variation of information (Meil˘a, 2007): V I(G, C) = H(G) + H(C) −2I(G, C) (5) = H(G|C) + H(C|G) (6) VI measures the information-theoretic distance between the two clusterings: a lower value means greater similarity, and VI = 0 if they are identical. Low VI indicates that (induced) clusters and (gold) clusters tend to overlap; i.e., knowing a character’s (induced) cluster usually tells us their (gold) cluster, and vice versa. Variation of information is a metric (symmetric and obeys triangle 10http://tvtropes.org 356 Character Names §5.1 TV Tropes §5.2 K Model P = 25 P = 50 P = 100 P = 25 P = 50 P = 100 25 Persona regression 7.73 7.32 6.79 6.26 6.13 5.74 Dirichlet persona 7.83 7.11 6.44 6.29 6.01 5.57 50 Persona regression 7.59 7.08 6.46 6.30 5.99 5.65 Dirichlet persona 7.57 7.04 6.35 6.23 5.88 5.60 100 Persona regression 7.58 6.95 6.32 6.11 6.05 5.49 Dirichlet persona 7.64 6.95 6.25 6.24 5.91 5.42 Table 1: Variation of information between learned personas and gold clusters for different numbers of topics K and personas P. Lower values are better. All values are reported in bits. Character Names §5.1 TV Tropes §5.2 K Model P = 25 P = 50 P = 100 P = 25 P = 50 P = 100 25 Persona regression 62.8 (↑41%) 59.5 (↑40%) 53.7 (↑33%) 42.3 (↑31%) 38.5 (↑24%) 33.1 (↑25%) Dirichlet persona 54.7 (↑27%) 50.5 (↑26%) 45.4 (↑17%) 39.5 (↑20%) 31.7 (↑28%) 25.1 (↑21%) 50 Persona regression 63.1 (↑42%) 59.8 (↑42%) 53.6 (↑34%) 42.9 (↑30%) 39.1 (↑33%) 31.3 (↑20%) Dirichlet persona 57.2 (↑34%) 49.0 (↑23%) 44.7 (↑16%) 39.7 (↑30%) 31.5 (↑32%) 24.6 (↑22%) 100 Persona regression 63.1 (↑42%) 57.7 (↑39%) 53.0 (↑34%) 43.5 (↑33%) 32.1 (↑28%) 26.5 (↑22%) Dirichlet persona 55.3 (↑30%) 49.5 (↑24%) 45.2 (↑18%) 39.7 (↑34%) 29.9 (↑24%) 23.6 (↑19%) Table 2: Purity scores of recovering gold clusters. Higher values are better. Each absolute purity score is paired with its improvement over a controlled baseline of permuting the learned labels while keeping the cluster proportions the same. inequality), and has a number of other desirable properties. Table 1 presents the VI between the learned persona clusters and gold clusters, for varying numbers of personas (P = {25, 50, 100}) and topics (K = {25, 50, 100}). To determine significance with respect to a random baseline, we conduct a permutation test (Fisher, 1935; Pitman, 1937) in which we randomly shuffle the labels of the learned persona clusters and count the number of times in 1,000 such trials that the VI of the observed persona labels is lower than the VI of the permuted labels; this defines a nonparametric p-value. All results presented are significant at p < 0.001 (i.e. observed VI is never lower than the simulation VI). Over all tests in comparison to both gold clusterings, we see VI improve as both P and, to a lesser extent, K increase. 
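For reference, variation of information as used in Table 1 can be computed directly from the joint label counts; the sketch below reports it in bits. The function name and label encodings are arbitrary; any hashable labels work.

```python
import math
from collections import Counter

def variation_of_information(gold, induced):
    """VI(G, C) = H(G) + H(C) - 2 I(G, C), in bits (Eqs. 5-6).  `gold` and
    `induced` are equal-length lists of cluster labels over the same items."""
    n = len(gold)
    pg, pc = Counter(gold), Counter(induced)
    joint = Counter(zip(gold, induced))

    def entropy(counts):
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    mutual_info = sum((c / n) * math.log2((c / n) / ((pg[g] / n) * (pc[k] / n)))
                      for (g, k), c in joint.items())
    return entropy(pg) + entropy(pc) - 2 * mutual_info

# Identical clusterings give VI = 0; mutually uninformative ones give H(G) + H(C).
print(variation_of_information(["a", "a", "b", "b"], [1, 1, 2, 2]))   # 0.0
print(variation_of_information(["a", "a", "b", "b"], [1, 2, 1, 2]))   # 2.0
```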
While this may be expected as the number of personas increase to match the number of distinct types in the gold clusters (970 and 72, respectively), the fact that VI improves as the number of latent topics increases suggests that more fine-grained topics are helpful for capturing nuanced character types.11 The difference between the persona regression model and the Dirichlet persona model here is not 11This trend is robust to the choice of cluster metric: here VI and F-score have a correlation of −0.87; as more latent topics and personas are added, clustering improves (causing the F-score to go up and the VI distance to go down). significant; while VI allows us to compare models with different numbers of latent clusters, its requirement that clusterings be mutually informative places a high overhead on models that are fundamentally unidirectional (in Table 1, for example, the room for improvement between two models of the same P and K is naturally smaller than the bigger difference between different P or K). While we would naturally prefer a text-only model to be as expressive as a model that requires potentially hard to acquire metadata, we tease apart whether a distinction actually does exist by evaluating the purity of the gold clusters with respect to the labels assigned them. 5.4 Purity For gold clusters G = {g1 . . . gk} and inferred clusters C = {c1 . . . cj} we calculate purity as: Purity = 1 N X k max j |gk ∩cj| (7) While purity cannot be used to compare models of different persona size P, it can help us distinguish between models of the same size. A model can attain perfect purity, however, by placing all characters into a single cluster; to control for this, we present a controlled baseline in which each character is assigned a latent character type label proportional to the size of the latent clusters we have learned (so that, for example, if one latent persona cluster contains 3.2% of the total characters, 357 Batman Jim Gordon dark, major, henchman shoot, aim, overpower sentence, arrest, assign Tony Stark Jason Bourne The Joker shoot, aim, overpower testify, rebuff, confess hatch, vow, undergo Van Helsing Colin Sullivan Dracula The Departed The Dark Knight Iron Man The Bourne Identity approve, die, suffer relent, refuse, agree inherit live imagine Jack Dawson Rachel Titanic Figure 3: Dramatis personae of The Dark Knight (2008), illustrating 3 of the 100 character types learned by the persona regression model, along with links from other characters in those latent classes to other movies. Each character type is listed with the top three latent topics with which it is associated. the probability of selecting that persona at random is 3.2%). Table 2 presents each model’s absolute purity score paired with its improvement over its controlled permutation (e.g., ↑41%). Within each fixed-size partition, the use of metadata yields a substantial improvement over the Dirichlet model, both in terms of absolute purity and in its relative improvement over its sizedcontrolled baseline. In practice, we find that while the Dirichlet model distinguishes between character personas in different movies, the persona regression model helps distinguish between different personas within the same movie. 6 Exploratory Data Analysis As with other generative approaches, latent persona models enable exploratory data analysis. To illustrate this, we present results from the persona regression model learned above, with 50 latent lexical classes and 100 latent personas. 
Figure 3 visualizes this data by focusing on a single movie, The Dark Knight (2008); the movie’s protagonist, Batman, belongs to the same latent persona as Detective Jim Gordon, as well as other action movie protagonists Jason Bourne and Tony Stark (Iron Man). The movie’s antagonist, The Joker, belongs to the same latent persona as Dracula from Van Helsing and Colin Sullivan from The Departed, illustrating the ability of personas to be informed by, but still cut across, different genres. Table 3 presents an exhaustive list of all 50 topics, along with an assigned label that consists of the single word with the highest PMI for that class. Of note are topics relating to romance (unite, marry, woo, elope, court), commercial transactions (purchase, sign, sell, owe, buy), and the classic criminal schema from Chambers (2011) (sentence, arrest, assign, convict, promote). Table 4 presents the most frequent 14 personas in our dataset, illustrated with characters from the 500 highest grossing movies. The personas learned are each three separate mixtures of the 50 latent topics (one for agent relations, one for patient relations, and one for attributes), as illustrated in figure 1 above. Rather than presenting a 3 × 50 histogram for each persona, we illustrate them by listing the most characteristic topics, movie characters, and metadata features associated with it. Characteristic actions and features are defined as those having the highest smoothed pointwise mutual information with that class; exemplary characters are those with the highest posterior probability of being drawn from that class. Among the personas learned are canonical male action heroes (exemplified by the protagonists of The Bourne Supremacy, Speed, and Taken), superheroes (Hulk, Batman and Robin, Hector of Troy) and several romantic comedy types, largely characterized by words drawn from the FLIRT topic, including flirt, reconcile, date, dance and forgive. 
358 Label Most characteristic words Label Most characteristic words UNITE unite marry woo elope court SWITCH switch confirm escort report instruct PURCHASE purchase sign sell owe buy INFATUATE infatuate obsess acquaint revolve concern SHOOT shoot aim overpower interrogate kill ALIEN alien child governor bandit priest EXPLORE explore investigate uncover deduce CAPTURE capture corner transport imprison trap WOMAN woman friend wife sister husband MAYA maya monster monk goon dragon WITCH witch villager kid boy mom INHERIT inherit live imagine experience share INVADE invade sail travel land explore TESTIFY testify rebuff confess admit deny DEFEAT defeat destroy transform battle inject APPLY apply struggle earn graduate develop CHASE chase scare hit punch eat EXPEL expel inspire humiliate bully grant TALK talk tell reassure assure calm DIG dig take welcome sink revolve POP pop lift crawl laugh shake COMMAND command abduct invade seize surrender SING sing perform cast produce dance RELENT relent refuse agree insist hope APPROVE approve die suffer forbid collapse EMBARK embark befriend enlist recall meet WEREWOLF werewolf mother parent killer father MANIPULATE manipulate conclude investigate conduct DINER diner grandfather brother terrorist ELOPE elope forget succumb pretend like DECAPITATE decapitate bite impale strangle stalk FLEE flee escape swim hide manage REPLY reply say mention answer shout BABY baby sheriff vampire knight spirit DEMON demon narrator mayor duck crime BIND bind select belong refer represent CONGRATULATE congratulate cheer thank recommend REJOIN rejoin fly recruit include disguise INTRODUCE introduce bring mock read hatch DARK dark major henchman warrior sergeant HATCH hatch don exist vow undergo SENTENCE sentence arrest assign convict promote FLIRT flirt reconcile date dance forgive DISTURB disturb frighten confuse tease scare ADOPT adopt raise bear punish feed RIP rip vanish crawl drive smash FAIRY fairy kidnapper soul slave president INFILTRATE infiltrate deduce leap evade obtain BUG bug zombie warden king princess SCREAM scream faint wake clean hear Table 3: Latent topics learned for K = 50 and P = 100. The words shown for each class are those with the highest smoothed PMI, with the label being the single word with the highest PMI. Freq Actions Characters Features 0.109 DARKm, SHOOTa, SHOOTp Jason Bourne (The Bourne Supremacy), Jack Traven (Speed), Jean-Claude (Taken) Action, Male, War film 0.079 CAPTUREp, INFILTRATEa, FLEEa Aang (The Last Airbender), Carly (Transformers: Dark of the Moon), Susan Murphy/Ginormica (Monsters vs. Aliens) Female, Action, Adventure 0.067 DEFEATa, DEFEATp, INFILTRATEa Glenn Talbot (Hulk), Batman (Batman and Robin), Hector (Troy) Action, Animation, Adventure 0.060 COMMANDa, DEFEATp, CAPTUREp Zoe Neville (I Am Legend), Ursula (The Little Mermaid), Joker (Batman) Action, Adventure, Male 0.046 INFILTRATEa, EXPLOREa, EMBARKa Peter Parker (Spider-Man 3), Ethan Hunt (Mission: Impossible), Jason Bourne (The Bourne Ultimatum) Male, Action, Age 34-36 0.036 FLIRTa, FLIRTp, TESTIFYa Mark Darcy (Bridget Jones: The Edge of Reason), Jerry Maguire (Jerry Maguire), Donna (Mamma Mia!) 
Female, Romance Film, Comedy 0.033 EMBARKa, INFILTRATEa, INVADEa Perseus (Wrath of the Titans), Maximus Decimus Meridius (Gladiator), Julius (Twins) Male, Chinese Movies, Spy 0.027 CONGRATULATEa, CONGRATULATEp, SWITCHa Professor Albus Dumbledore (Harry Potter and the Philosopher’s Stone), Magic Mirror (Shrek), Josephine Anwhistle (Lemony Snicket’s A Series of Unfortunate Events) Age 58+, Family Film, Age 51-57 0.025 SWITCHa, SWITCHp, MANIPULATEa Clarice Starling (The Silence of the Lambs), Hannibal Lecter (The Silence of the Lambs), Colonel Bagley (The Last Samurai) Age 58+, Male, Age 45-50 0.022 REPLYa, TALKp, FLIRTp Graham (The Holiday), Abby Richter (The Ugly Truth), Anna Scott (Notting Hill) Female, Comedy, Romance Film 0.020 EXPLOREa, EMBARKa, CAPTUREp Harry Potter (Harry Potter and the Philosopher’s Stone), Harry Potter (Harry Potter and the Chamber of Secrets), Captain Leo Davidson (Planet of the Apes) Adventure, Family Film, Horror 0.018 FAIRYm, COMMANDa, CAPTUREp Captain Jack Sparrow (Pirates of the Caribbean: At World’s End), Shrek (Shrek), Shrek (Shrek Forever After) Action, Family Film, Animation 0.018 DECAPITATEa, DECAPITATEp, RIPa Jericho Cane (End of Days), Martin Riggs (Lethal Weapon 2), Gabriel Van Helsing (Van Helsing) Horror, Slasher, Teen 0.017 APPLYa, EXPELp, PURCHASEp Oscar (Shark Tale), Elizabeth Halsey (Bad Teacher), Dre Parker (The Karate Kid) Female, Teen, Under Age 22 Table 4: Of 100 latent personas learned, we present the top 14 by frequency. Actions index the latent topic classes presented in table 3; subscripts denote whether the character is predominantly the agent (a), patient (p) or is modified by an attribute (m). 359 7 Conclusion We present a method for automatically inferring latent character personas from text (and metadata, when available). While our testbed has been textual synopses of film, this approach is easily extended to other genres (such as novelistic fiction) and to non-fictional domains as well, where the choice of portraying a real-life person as embodying a particular kind of persona may, for instance, give insight into questions of media framing and bias in newswire; self-presentation of individual personas likewise has a long history in communication theory (Goffman, 1959) and may be useful for inferring user types for personalization systems (El-Arini et al., 2012). While the goal of this work has been to induce a set of latent character classes and partition all characters among them, one interesting question that remains is how a specific character’s actions may informatively be at odds with their inferred persona, given the choice of that persona as the single best fit to explain the actions we observe. By examining how any individual character deviates from the behavior indicative of their type, we might be able to paint a more nuanced picture of how a character can embody a specific persona while resisting it at the same time. Acknowledgments We thank Megan Morrison at the CMU School of Drama for early conversations guiding our work, as well as the anonymous reviewers for helpful comments. The research reported in this article was supported by U.S. National Science Foundation grant IIS-0915187 and by an ARCS scholarship to D.B. This work was made possible through the use of computing resources made available by the Pittsburgh Supercomputing Center. References Aristotle. 335 BCE. Poetics, translated by Samuel H. Butcher (1902). Macmillan, London. David M. Blei, Andrew Ng, and Michael Jordan. 2003. Latent dirichlet allocation. 
JMLR, 3:993–1022. Joseph Campbell. 1949. The Hero with a Thousand Faces. Pantheon Books. Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of ACL-08: HLT. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the 47th Annual Meeting of the ACL. Nathanael Chambers. 2011. Inducing Event Schemas and their Participants from Unlabeled Text. Ph.D. thesis, Stanford University. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. Stanford typed dependencies manual. Technical report, Stanford University. Lajos Egri. 1946. The Art of Dramatic Writing. Simon and Schuster, New York. Khalid El-Arini, Ulrich Paquet, Ralf Herbrich, Jurgen Van Gael, and Blaise Ag¨uera y Arcas. 2012. Transparent user models for personalization. In Proceedings of the 18th ACM SIGKDD. Micha Elsner. 2012. Character-based kernels for novelistic plot structure. In Proceedings of the 13th Conference of the EACL. Mark Alan Finlayson. 2011. Learning Narrative Structure from Annotated Folktales. Ph.D. thesis, MIT. R. A. Fisher. 1935. The Design of Experiments. Oliver and Boyde, Edinburgh and London. Erving Goffman. 1959. The Presentation of the Self in Everyday Life. Anchor. Amit Goyal, Ellen Riloff, and Hal Daum´e, III. 2010. Automatically producing plot unit representations for narrative text. In Proceedings of the 2010 Conference on EMNLP. Trond Grenager and Christopher D. Manning. 2006. Unsupervised discovery of a statistical verb lexicon. In Proceedings of the 2006 Conference on EMNLP. Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. PNAS, 101(suppl. 1):5228–5235. Carl Jung. 1981. The Archetypes and The Collective Unconscious, volume 9 of Collected Works. Bollingen, Princeton, NJ, 2nd edition. Neil McIntyre and Mirella Lapata. 2010. Plot induction and evolutionary search for story generation. In Proceedings of the 48th Annual Meeting of the ACL. Association for Computational Linguistics. Robert McKee. 1997. Story: Substance, Structure, Style and the Principles of Screenwriting. HarperColllins. Marina Meil˘a. 2007. Comparing clusterings—an information based distance. Journal of Multivariate Analysis, 98(5):873–895. David Mimno and Andrew McCallum. 2008. Topic models conditioned on arbitrary features with dirichlet-multinomial regression. In Proceedings of UAI. 360 Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of english words. In Proceedings of the 31st Annual Meeting of the ACL. E. J. G. Pitman. 1937. Significance tests which may be applied to samples from any population. Supplement to the Journal of the Royal Statistical Society, 4(1):119–130. Michaela Regneri, Alexander Koller, and Manfred Pinkal. 2010. Learning script knowledge with web experiments. In Proceedings of the 48th Annual Meeting of the ACL. Michaela Regneri, Alexander Koller, Josef Ruppenhofer, and Manfred Pinkal. 2011. Learning script participants from unlabeled data. In Proceedings of the Conference on Recent Advances in Natural Language Processing. Roger C. Schank and Robert P. Abelson. 1977. Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Lawrence Erlbaum, Hillsdale, NJ. Ivan Titov and Alexandre Klementiev. 2012. A bayesian approach to unsupervised semantic role induction. In Proceedings of the 13th Conference of EACL. Greg C. G. Wei and Martin A. Tanner. 1990. 
A Monte Carlo implementation of the EM algorithm and the poor man’s data augmentation algorithms. Journal of the American Statistical Association, 85:699–704. Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of the Conference on EMNLP.
2013
35
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 362–371, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Scalable Decipherment for Machine Translation via Hash Sampling Sujith Ravi Google Mountain View, CA 94043 [email protected] Abstract In this paper, we propose a new Bayesian inference method to train statistical machine translation systems using only nonparallel corpora. Following a probabilistic decipherment approach, we first introduce a new framework for decipherment training that is flexible enough to incorporate any number/type of features (besides simple bag-of-words) as side-information used for estimating translation models. In order to perform fast, efficient Bayesian inference in this framework, we then derive a hash sampling strategy that is inspired by the work of Ahmed et al. (2012). The new translation hash sampler enables us to scale elegantly to complex models (for the first time) and large vocabulary/corpora sizes. We show empirical results on the OPUS data—our method yields the best BLEU scores compared to existing approaches, while achieving significant computational speedups (several orders faster). We also report for the first time—BLEU score results for a largescale MT task using only non-parallel data (EMEA corpus). 1 Introduction Statistical machine translation (SMT) systems these days are built using large amounts of bilingual parallel corpora. The parallel corpora are used to estimate translation model parameters involving word-to-word translation tables, fertilities, distortion, phrase translations, syntactic transformations, etc. But obtaining parallel data is an expensive process and not available for all language pairs or domains. On the other hand, monolingual data (in written form) exists and is easier to obtain for many languages. Learning translation models from monolingual corpora could help address the challenges faced by modern-day MT systems, especially for low resource language pairs. Recently, this topic has been receiving increasing attention from researchers and new methods have been proposed to train statistical machine translation models using only monolingual data in the source and target language. The underlying motivation behind most of these methods is that statistical properties for linguistic elements are shared across different languages and some of these similarities (mappings) could be automatically identified from large amounts of monolingual data. The MT literature does cover some prior work on extracting or augmenting partial lexicons using non-parallel corpora (Rapp, 1995; Fung and McKeown, 1997; Koehn and Knight, 2000; Haghighi et al., 2008). However, none of these methods attempt to train end-to-end MT models, instead they focus on mining bilingual lexicons from monolingual corpora and often they require parallel seed lexicons as a starting point. Some of them (Haghighi et al., 2008) also rely on additional linguistic knowledge such as orthography, etc. to mine word translation pairs across related languages (e.g., Spanish/English). Unsupervised training methods have also been proposed in the past for related problems in decipherment (Knight and Yamada, 1999; Snyder et al., 2010; Ravi and Knight, 2011a) where the goal is to decode unknown scripts or ciphers. 
The body of work that is more closely related to ours include that of Ravi and Knight (2011b) who introduced a decipherment approach for training translation models using only monolingual cor362 pora. Their best performing method uses an EM algorithm to train a word translation model and they show results on a Spanish/English task. Nuhn et al. (2012) extend the former approach and improve training efficiency by pruning translation candidates prior to EM training with the help of context similarities computed from monolingual corpora. In this work we propose a new Bayesian inference method for estimating translation models from scratch using only monolingual corpora. Secondly, we introduce a new feature-based representation for sampling translation candidates that allows one to incorporate any amount of additional features (beyond simple bag-of-words) as sideinformation during decipherment training. Finally, we also derive a new accelerated sampling mechanism using locality sensitive hashing inspired by recent work on fast, probabilistic inference for unsupervised clustering (Ahmed et al., 2012). The new sampler allows us to perform fast, efficient inference with more complex translation models (than previously used) and scale better to large vocabulary and corpora sizes compared to existing methods as evidenced by our experimental results on two different corpora. 2 Decipherment Model for Machine Translation We now describe the decipherment problem formulation for machine translation. Problem Formulation: Given a source text f (i.e., source word sequences f1...fm) and a monolingual target language corpus, our goal is to decipher the source text and produce a target translation. Contrary to standard machine translation training scenarios, here we have to estimate the translation model Pθ(f|e) parameters using only monolingual data. During decipherment training, our objective is to estimate the model parameters in order to maximize the probability of the source text f as suggested by Ravi and Knight (2011b). arg max θ Y f X e P(e) · Pθ(f|e) (1) For P(e), we use a word n-gram language model (LM) trained on monolingual target text. We then estimate the parameters of the translation model Pθ(f|e) during training. Translation Model: Machine translation is a much more complex task than solving other decipherment tasks such as word substitution ciphers (Ravi and Knight, 2011b; Dou and Knight, 2012). The mappings between languages involve non-determinism (i.e., words can have multiple translations), re-ordering of words can occur as grammar and syntax varies with language, and in addition word insertion and deletion operations are also involved. Ideally, for the translation model P(f|e) we would like to use well-known statistical models such as IBM Model 3 and estimate its parameters θ using the EM algorithm (Dempster et al., 1977). But training becomes intractable with complex translation models and scalability is also an issue when large corpora sizes are involved and the translation tables become huge to fit in memory. So, instead we use a simplified generative process for the translation model as proposed by Ravi and Knight (2011b) and used by others (Nuhn et al., 2012) for this task: 1. Generate a target (e.g., English) string e = e1...el, with probability P(e) according to an n-gram language model. 2. Insert a NULL word at any position in the English string, with uniform probability. 3. 
For each target word token ei (including NULLs), choose a source word translation fi, with probability Pθ(fi|ei). The source word may be NULL. 4. Swap any pair of adjacent source words fi−1, fi, with probability P(swap); set to 0.1. 5. Output the foreign string f = f1...fm, skipping over NULLs. Previous approaches (Ravi and Knight, 2011b; Nuhn et al., 2012) use the EM algorithm to estimate all the parameters θ in order to maximize likelihood of the foreign corpus. Instead, we propose a new Bayesian inference framework to estimate the translation model parameters. In spite of using Bayesian inference which is typically slow in practice (with standard Gibbs sampling), we show later that our method is scalable and permits decipherment training using more complex translation models (with several additional parameters). 363 2.1 Adding Phrases, Flexible Reordering and Fertility to Translation Model We now extend the generative process (described earlier) to more complex translation models. Non-local Re-ordering: The generative process described earlier limits re-ordering to local or adjacent word pairs in a source sentence. We extend this to allow re-ordering between any pair of words in the sentence. Fertility: We also add a fertility model Pθfert to the translation model using the formula: Pθfert = Y i nθ(φi|ei) · pφ0 1 (2) nθ(φi|ei) = αfert · P0(φi|ei) + C−i(ei, φi) αfert + C−i(ei) (3) where, P0 represents the base distribution (which is set to uniform) in a Chinese Restaurant Process (CRP)1 for the fertility model and C−i represents the count of events occurring in the history excluding the observation at position i. φi is the number of source words aligned to (i.e., generated by) the target word ei. We use sparse Dirichlet priors for all the translation model components.2 φ0 represents the target NULL word fertility and p1 is the insertion probability which is fixed to 0.1. In addition, we set a maximum threshold for fertility values φi ≤γ · m, where m is the length of the source sentence. This discourages a particular target word (e.g., NULL word) from generating too many source words in the same sentence. In our experiments, we set γ = 0.3. We enforce this constraint in the training process during sampling.3 Modeling Phrases: Finally, we extend the translation candidate set in Pθ(fi|ei) to model phrases in addition to words for the target side (i.e., ei can now be a word or a phrase4 previously seen in the monolingual target corpus). This greatly increases the training time since in each sampling step, we now have many more ei candidates to choose from. In Section 4, we describe how we deal 1Each component in the translation model (word/phrase translations Pθ(fi|ei), fertility Pθfert, etc.) is modeled using a CRP formulation. 2i.e., All the concentration parameters are set to low values; αf|e = αfert = 0.01. 3We only apply this constraint when training on source text/corpora made of long sentences (>10 words) where the sampler might converge very slowly. For short sentences, a sparse prior on fertility αfert typically discourages a target word from being aligned to too many different source words. 4Phrase size is limited to two words in our experiments. with this problem by using a fast, efficient sampler based on hashing that allows us to speed up the Bayesian inference significantly whereas standard Gibbs sampling would be extremely slow. 
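As a concrete illustration of how a single sampled derivation is scored under this model, the sketch below combines the language-model term, the word-for-word translation probabilities, and the cached-count (CRP) fertility term of Eqs. (2)–(3). The count tables, the uniform fertility base distribution with an assumed cap of 10, and all names are stand-ins; the γ-threshold on fertility is omitted here.

```python
import math
from collections import defaultdict

ALPHA_FERT = 0.01            # sparse Dirichlet concentration (alpha_fert)
P0_FERT = 1.0 / 10           # uniform base distribution, assuming fertilities 1..10

def fertility_prob(e_word, phi, crp_counts):
    """n_theta(phi | e) from Eq. (3): cached CRP counts of how many times target
    word e has generated phi source words elsewhere, plus the uniform base P0."""
    c_pair = crp_counts[(e_word, phi)]
    c_word = sum(c for (w, _), c in crp_counts.items() if w == e_word)
    return (ALPHA_FERT * P0_FERT + c_pair) / (ALPHA_FERT + c_word)

def derivation_logprob(alignment, lm_logprob, trans_prob, crp_counts, p_insert=0.1):
    """Log-probability of one derivation: P(e) from the n-gram LM, P(f_i | e_i)
    per generated source word, and the fertility term of Eq. (2), where the NULL
    word contributes p_insert ** phi_0."""
    logp = lm_logprob
    for e_word, sources in alignment.items():
        for f_word in sources:
            logp += math.log(trans_prob[(f_word, e_word)])
        phi = len(sources)
        if e_word == "NULL":
            logp += phi * math.log(p_insert)
        else:
            logp += math.log(fertility_prob(e_word, phi, crp_counts))
    return logp

crp = defaultdict(int, {("house", 1): 3})
trans = defaultdict(lambda: 1e-4, {("casa", "house"): 0.4, ("de", "NULL"): 0.05})
alignment = {"house": ["casa"], "NULL": ["de"]}     # target word -> generated source words
print(derivation_logprob(alignment, lm_logprob=-4.2, trans_prob=trans, crp_counts=crp))
```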
3 Feature-based representation for Source and Target The model described in the previous section while being flexible in describing the translation process, poses several challenges for training. As the source and target vocabulary sizes increase the size of the translation table (|Vf| · |Ve|) increases significantly and often becomes too huge to fit in memory. Additionally, performing Bayesian inference with such a complex model using standard Gibbs sampling can be very slow in practice. Here, we describe a new method for doing Bayesian inference by first introducing a featurebased representation for the source and target words (or phrases) from which we then derive a novel proposal distribution for sampling translation candidates. We represent both source and target words in a vector space similar to how documents are represented in typical information retrieval settings. But unlike documents, here each word w is associated with a feature vector w1...wd (where wi represents the weight for the feature indexed by i) which is constructed from monolingual corpora. For instance, context features for word w may include other words (or phrases) that appear in the immediate context (n-gram window) surrounding w in the monolingual corpus. Similarly, we can add other features based on topic models, orthography (Haghighi et al., 2008), temporal (Klementiev et al., 2012), etc. to our representation all of which can be extracted from monolingual corpora. Next, given two high dimensional vectors u and v it is possible to calculate the similarity between the two words denoted by s(u, v). The feature construction process is described in more detail below: Target Language: We represent each word (or phrase) ei with the following contextual features along with their counts: (a) f−context: every (word n-gram, position) pair immediately preceding ei in the monolingual corpus (n=1, position=−1), (b) similar features f+context to model the context following ei, and (c) we also throw in generic context features fscontext without position information— every word that co-occurs with ei in the same sen364 tence. While the two position-features provide specific context information (may be sparse for large monolingual corpora), this feature is more generic and captures long-distance co-occurrence statistics. Source Language: Words appearing in a source sentence f are represented using the corresponding target translation e = e1...em generated for f in the current sample during training. For each source word fj ∈f, we look at the corresponding word ej in the target translation. We then extract all the context features of ej in the target translation sample sentence e and add these features (f−context, f+context, fscontext) with weights to the feature representation for fj. Unlike the target word feature vectors (which can be pre-computed from the monolingual target corpus), the feature vector for every source word fj is dynamically constructed from the target translation sampled in each training iteration. This is a key distinction of our framework compared to previous approaches that use contextual similarity (or any other) features constructed from static monolingual corpora (Rapp, 1995; Koehn and Knight, 2000; Nuhn et al., 2012). 
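The target-side feature construction described above can be sketched as follows: a preceding-word feature (position −1), a following-word feature (position +1), and position-free sentence co-occurrence features, each weighted by its count in the monolingual corpus. Treating raw counts as weights and the exact feature keys are illustrative choices of this sketch.

```python
from collections import Counter, defaultdict

def target_feature_vectors(sentences):
    """Sparse context-feature vectors for every target word, keyed by
    (feature_type, word): f_-context, f_+context, and f_scontext with counts."""
    feats = defaultdict(Counter)
    for sent in sentences:
        for i, w in enumerate(sent):
            if i > 0:
                feats[w][("prev", sent[i - 1])] += 1      # f_-context (position -1)
            if i + 1 < len(sent):
                feats[w][("next", sent[i + 1])] += 1      # f_+context (position +1)
            for j, other in enumerate(sent):              # f_scontext (same sentence)
                if j != i:
                    feats[w][("sent", other)] += 1
    return feats

corpus = [["the", "cat", "sat", "down"], ["the", "dog", "sat", "down"]]
vectors = target_feature_vectors(corpus)
print(vectors["sat"].most_common(4))
```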
Note that as we add more and more features for a particular word (by training on larger monolingual corpora or adding new types of features, etc.), it results in the feature representation becoming more sparse (especially for source feature vectors) which can cause problems in efficiency as well as robustness when computing similarity against other vectors. In the next section, we will describe how we mitigate this problem by projecting into a low-dimensional space by computing hash signatures. In all our experiments, we only use the features described above for representing source and target words. We note that the new sampling framework is easily extensible to many additional feature types (for example, monolingual topic model features, etc.) which can be efficiently handled by our inference algorithm and could further improve translation performance but we leave this for future work. 4 Bayesian MT Decipherment via Hash Sampling The next step is to use the feature representations described earlier and iteratively sample a target word (or phrase) translation candidate ei for every word fi in the source text f. This involves choosing from |Ve| possible target candidates in every step which can be highly inefficient (and infeasible for large vocabulary sizes). One possible strategy is to compute similarity scores s(wfi, we′) between the current source word feature vector wfi and feature vectors we′∈Ve for all possible candidates in the target vocabulary. Following this, we can prune the translation candidate set by keeping only the top candidates e∗according to the similarity scores. Nuhn et al. (2012) use a similar strategy to obtain a more compact translation table that improves runtime efficiency for EM training. Their approach requires calculating and sorting all |Ve|·|Vf| distances in time O(V 2 ·log(V )), where V = max(|Ve|, |Vf|). Challenges: Unfortunately, there are several additional challenges which makes inference very hard in our case. Firstly, we would like to include as many features as possible to represent the source/target words in our framework besides simple bag-of-words context similarity (for example, left-context, right-context, and other generalpurpose features based on topic models, etc.). This makes the complexity far worse (in practice) since the dimensionality of the feature vectors d is a much higher value than |Ve|. Computing similarity scores alone (na¨ıvely) would incur O(|Ve| · d) time which is prohibitively huge since we have to do this for every token in the source language corpus. Secondly, for Bayesian inference we need to sample from a distribution that involves computing probabilities for all the components (language model, translation model, fertility, etc.) described in Equation 1. This distribution needs to be computed for every source word token fi in the corpus, for all possible candidates ei ∈Ve and the process has to be repeated for multiple sampling iterations (typically more than 1000). Doing standard collapsed Gibbs sampling in this scenario would be very slow and intractable. We now present an alternative fast, efficient inference strategy that overcomes many of the challenges described above and helps accelerate the sampling process significantly. First, we set our translation models within the context of a more generic and widely known family of distributions—mixtures of exponential families. 
Then we derive a novel proposal distribution for sampling translation candidates and introduce a new sampler for decipherment training that 365 is based on locality sensitive hashing (LSH). Hashing methods such as LSH have been widely used in the past in several scenarios including NLP applications (Ravichandran et al., 2005). Most of these approaches employ LSH within heuristic methods for speeding up nearestneighbor look up and similarity computation techniques. However, we use LSH hashing within a probabilistic framework which is very different from the typical use of LSH. Our work is inspired by some recent work by Ahmed et al. (2012) on speeding up Bayesian inference for unsupervised clustering. We use a similar technique as theirs but a different approximate distribution for the proposal, one that is bettersuited for machine translation models and without some of the additional overhead required for computing certain terms in the original formulation. Mixtures of Exponential Families: The translation models described earlier (Section 2) can be represented as mixtures of exponential families, specifically mixtures of multinomials. In exponential families, distributions over random variables are given by: p(x; θ) = exp(⟨φ(x), θ⟩) −g(θ) (4) where, φ : X →F is a map from x to the space of sufficient statistics and θ ∈F. The term g(θ) ensures that p(x; θ) is properly normalized. X is the domain of observations X = x1, ..., xm drawn from some distribution p. Our goal is to estimate p. In our case, this refers to the translation model from Equation 1. We also choose corresponding conjugate Dirichlet distributions for priors which have the property that the posterior distribution p(θ|X) over θ remains in the same family as p(θ). Note that the (translation) model in our case consists of multiple exponential families components—a multinomial pertaining to the language model (which remains fixed5), and other components pertaining to translation probabilities Pθ(fi|ei), fertility Pθfert, etc. To do collapsed Gibbs sampling under this model, we would perform the following steps during sampling: 1. For a given source word token fi draw target 5A high value for the LM concentration parameter α ensures that the LM probabilities do not deviate too far from the original fixed base distribution during sampling. translation ei ∼ p(ei|F, E−i) ∝p(e) · p(fi|ei, F −i, E−i) · pfert(·|ei, F −i, E−i) · ... (5) where, F is the full source text and E the full target translation generated during sampling. 2. Update the sufficient statistics for the changed target translation assignments. For large target vocabularies, computing p(fi|ei, F −i, E−i) dominates the inference procedure. We can accelerate this step significantly using a good proposal distribution via hashing. Locality Sensitive Hash Sampling: For general exponential families, here is a Taylor approximation for the data likelihood term (Ahmed et al., 2012): p(x|·) ≈exp(⟨φ(x), θ∗⟩) −g(θ∗) (6) where, θ∗is the expected parameter (sufficient statistics). For sampling the translation model, this involves computing an expensive inner product ⟨φ(fi), θ∗ e′⟩ for each source word fi which has to be repeated for every translation candidate e′, including candidates that have very low probabilities and are unlikely to be chosen as the translation for fj. So, during decipherment training a standard collapsed Gibbs sampler will waste most of its time on expensive computations that will be discarded in the end anyways. 
Also, unlike some standard generative models used in other unsupervised learning scenarios (e.g., clustering) that model only observed features (namely words appearing in the document), here we would like to enrich the translation model with a lot more features (side-information). Instead, we can accelerate the computation of the inner product ⟨φ(fi), θ∗ e′⟩using a hash sampling strategy similar to (Ahmed et al., 2012). The underlying idea here is to use binary hashing (Charikar, 2002) to explore only those candidates e′ that are sufficiently close to the best matching translation via a proposal distribution. Next, we briefly introduce some notations and existing theoretical results related to binary hashing before describing the hash sampling procedure. For any two vectors u, v ∈Rn, ⟨u, v⟩= ∥u∥· ∥v∥· cos ∡(u, v) (7) 366 ∡(u, v) = πPr{sgn[⟨u, w⟩] ̸= sgn[⟨v, w⟩]} (8) where, w is a random vector drawn from a symmetric spherical distribution and the term inside Pr{·} represents the relation between the signs of the two inner products. Let hl(v) ∈{0, 1}l be an l-bit binary hash of v where: [hl(v)]i := sgn[⟨v, wi⟩]; wi ∼Um. Then the probability of matching signs is given by: zl(u, v) := 1 l ∥h(u) −h(v)∥1 (9) So, zl(u, v) measures how many bits differ between the hash vectors h(u) and h(v) associated with u, v. Combining this with Equations 6 and 7 we can estimate the unnormalized log-likelihood of a source word fi being translated as target e′ via: sl(fi, e′) ∝∥θe′∥· ∥φ(fi)∥· cos πzl(φ(fi), θe′) (10) For each source word fi, we now sample from this new distribution (after normalization) instead of the original one. The binary hash representation for the two vectors yield significant speedups during sampling since Hamming distance computation between h(u) and h(v) is highly optimized on modern CPUs. Hence, we can compute an estimate for the inner product quite efficiently.6 Updating the hash signatures: During training, we compute the target candidate projection h(θe′) and corresponding norm only once7 which is different from the setup of Ahmed et al. (2012). The source word projection φ(fi) is dynamically updated in every sampling step. Note that doing this na¨ıvely would scale slowly as O(Dl) where D is the total number of features but instead we can update the hash signatures in a more efficient manner that scales as O(Di>0l) where Di>0 is the number of non-zero entries in the feature representation for the source word φ(fi). Also, we do not need to store the random vectors w in practice since these can be computed on the fly using hash functions. The inner product approximation also yields some theoretical guarantees for the hash sampler.8 6We set l = 32 bits in our experiments. 7In practice, we can ignore the norm terms to further speed up sampling since this is only an estimate for the proposal distribution and we follow this with the Metropolis Hastings step. 8For further details, please refer to (Ahmed et al., 2012). 4.1 Metropolis Hastings In each sampling step, we use the distribution from Equation 10 as a proposal distribution in a Metropolis Hastings scheme to sample target translations for each source word. 
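The random-hyperplane (sign) hashes and the Hamming-distance estimate of the inner product in Eqs. (7)–(10) can be sketched as below. The bit width, dimensions, and candidate set are toy values; the real sampler additionally folds in the model's count-based sufficient statistics and follows this proposal with the Metropolis-Hastings correction described next.

```python
import numpy as np

def signatures(X, n_bits=32, seed=0):
    """l-bit sign hashes h_l(v): one shared random hyperplane per bit (Charikar, 2002)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_bits))
    return (X @ W) >= 0                                   # boolean rows are the signatures

def approx_scores(f_vec, f_sig, target_feats, target_sigs):
    """Estimate <phi(f_i), theta_e'> for every candidate e' via Eqs. (9)-(10):
    z_l = fraction of differing bits, angle ~ pi * z_l, score ~ norms * cos(pi * z_l)."""
    z = (target_sigs != f_sig).sum(axis=1) / f_sig.shape[-1]
    norms = np.linalg.norm(target_feats, axis=1) * np.linalg.norm(f_vec)
    return norms * np.cos(np.pi * z)

rng = np.random.default_rng(1)
targets = rng.standard_normal((500, 2000))               # 500 candidate feature vectors
source = targets[42] + 0.1 * rng.standard_normal(2000)   # a source vector close to candidate 42
sigs = signatures(np.vstack([targets, source]), n_bits=64)
scores = approx_scores(source, sigs[-1], targets, sigs[:-1])
print("best candidate:", int(np.argmax(scores)))          # 42, with high probability
```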
Once a new target translation e′ is sampled for source word fi from the proposal distribution q(·) ∝expsl(fi,e′), we accept the proposal (and update the corresponding hash signatures) according to the probability r r = q(eold i ) · pnew(·) q(enew i ) · pold(·) (11) where, pold(·), pnew(·) are the true conditional likelihood probabilities according to our model (including the language model component) for the old, new sample respectively. 5 Training Algorithm Putting together all the pieces described in the previous section, we perform the following steps: 1. Initialization: We initialize the starting sample as follows: for each source word token, randomly sample a target word. If the source word also exists in the target vocabulary, then choose identity translation instead of the random one.9 2. Hash Sampling Steps: For each source word token fi, run the hash sampler: (a) Generate a proposal distribution by computing the hamming distance between the feature vectors for the source word and each target translation candidate. Sample a new target translation ei for fi from this distribution. (b) Compute the acceptance probability for the chosen translation using a Metropolis Hastings scheme and accept (or reject) the sample. In practice, computation of the acceptance probability only needs to be done every r iterations (where r can be anywhere from 5 or 100). Iterate through steps (2a) and (2b) for every word in the source text and then repeat this process for multiple iterations (usually 1000). 3. Other Sampling Operators: After every k iterations,10 perform the following sampling operations: (a) Re-ordering: For each source word token fi at position i, randomly choose another position j 9Initializing with identity translation rather than random choice helps in some cases, especially for unknown words that involve named entities, etc. 10We set k = 3 in our experiments. 367 Corpus Language Sent. Words Vocab. OPUS Spanish 13,181 39,185 562 English 19,770 61,835 411 EMEA French 550,000 8,566,321 41,733 Spanish 550,000 7,245,672 67,446 Table 1: Statistics of non-parallel corpora used here. in the source sentence and swap the translations ei with ej. During the sampling process, we compute the probabilities for the two samples—the original and the swapped versions, and then sample an alignment from this distribution. (b) Deletion: For each source word token, delete the current target translation (i.e., align it with the target NULL token). As with the reordering operation, we sample from a distribution consisting of the original and the deleted versions. 4. Decoding the foreign sentence: Finally, once the training is done (i.e., after all sampling iterations) we choose the final sample as our target translation output for the source text. 6 Experiments and Results We test our method on two different corpora. To evaluate translation quality, we use BLEU score (Papineni et al., 2002), a standard evaluation measure used in machine translation. First, we present MT results on non-parallel Spanish/English data from the OPUS corpus (Tiedemann, 2009) which was used by Ravi and Knight (2011b) and Nuhn et al. (2012). We show that our method achieves the best performance (BLEU scores) on this task while being significantly faster than both the previous approaches. We then apply our method to a much larger non-parallel French/Spanish corpus constructed from the EMEA corpus (Tiedemann, 2009). 
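The Metropolis-Hastings correction in Equation 11 amounts to a single accept/reject test; a minimal sketch is given below, with a hypothetical interface (`q` for the proposal probabilities, `true_prob` for the model's conditional likelihood including the language model, translation, and fertility terms).

```python
import random

def mh_accept(e_old, e_new, q, true_prob):
    """Metropolis-Hastings correction for the hash-sampler proposal (Eq. 11).

    q         : dict mapping each candidate to its proposal probability
    true_prob : callable returning the model's conditional likelihood for a
                candidate (language model, translation, fertility terms, ...)
    Returns the translation to keep for this source token.
    """
    r = (q[e_old] * true_prob(e_new)) / (q[e_new] * true_prob(e_old))
    if random.random() < min(1.0, r):
        return e_new    # accept: also update the cached hash signatures
    return e_old        # reject: keep the previous translation
```

Because the ratio only requires the true model probabilities for the two competing candidates, and because (as noted above) it can be evaluated only every few iterations, this correction remains cheap even in the larger French/Spanish EMEA setting just mentioned.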
Here the vocabulary sizes are much larger and we show how our new Bayesian decipherment method scales well to this task inspite of using complex translation models. We also report the first BLEU results on such a large-scale MT task under truly non-parallel settings (without using any parallel data or seed lexicon). For both the MT tasks, we also report BLEU scores for a baseline system using identity translations for common words (words appearing in both source/target vocabularies) and random translations for other words. 6.1 MT Task and Data OPUS movie subtitle corpus (Tiedemann, 2009): This is a large open source collection of parallel corpora available for multiple language pairs. We use the same non-parallel Spanish/English corpus used in previous works (Ravi and Knight, 2011b; Nuhn et al., 2012). The details of the corpus are listed in Table 1. We use the entire Spanish source text for decipherment training and evaluate the final English output to report BLEU scores. EMEA corpus (Tiedemann, 2009): This is a parallel corpus made out of PDF documents (articles from the medical domain) from the European Medicines Agency. We reserve the first 1k sentences in French as our source text (also used in decipherment training). To construct a nonparallel corpus, we split the remaining 1.1M lines as follows: first 550k sentences in French, last 550k sentences in Spanish. The latter is used to construct a target language model used for decipherment training. The corpus statistics are shown in Table 1. 6.2 Results OPUS: We compare the MT results (BLEU scores) from different systems on the OPUS corpus in Table 2. The first row displays baseline performance. The next three rows 1a–1c display performance achieved by two methods from Ravi and Knight (2011b). Rows 2a, 2b show results from the of Nuhn et al. (2012). The last two rows display results for the new method using Bayesian hash sampling. Overall, using a 3-gram language model (instead of 2-gram) for decipherment training improves the performance for all methods. We observe that our method produces much better results than the others even with a 2-gram LM. With a 3-gram LM, the new method achieves the best performance; the highest BLEU score reported on this task. It is also interesting to note that the hash sampling method yields much better results than the Bayesian inference method presented in (Ravi and Knight, 2011b). This is due to the accelerated sampling scheme introduced earlier which helps it converge to better solutions faster. Table 2 (last column) also compares the efficiency of different methods in terms of CPU time required for training. Both our 2-gram and 3-gram based methods are significantly faster than those previously reported for EM based training methods presented in (Ravi and Knight, 2011b; Nuhn 368 Method BLEU Time (hours) Baseline system (identity translations) 6.9 1a. EM with 2-gram LM (Ravi and Knight, 2011b) 15.3 ∼850h 1b. EM with whole-segment LM (Ravi and Knight, 2011b) 19.3 1c. Bayesian IBM Model 3 with 2-gram LM (Ravi and Knight, 2011b) 15.1 2a. EM+Context with 2-gram LM (Nuhn et al., 2012) 15.2 50h 2b. EM+Context with 3-gram LM (Nuhn et al., 2012) 20.9 200h 3. Bayesian (standard) Gibbs sampling with 2-gram LM 222h 4a. Bayesian Hash Sampling∗with 2-gram LM (this work) 20.3 2.6h 4b. 
Bayesian Hash Sampling∗with 3-gram LM (this work) 21.2 2.7h (∗sampler was run for 1000 iterations) Table 2: Comparison of MT performance (BLEU scores) and efficiency (running time in CPU hours) on the Spanish/English OPUS corpus using only non-parallel corpora for training. For the Bayesian methods 4a and 4b, the samplers were run for 1000 iterations each on a single machine (1.8GHz Intel processor). For 1a, 2a, 2b, we list the training times as reported by Nuhn et al. (2012) based on their EM implementation for different settings. Method BLEU Baseline system (identity translations) 3.0 Bayesian Hash Sampling with 2-gram LM vocab=full (Ve), add fertility=no 4.2 vocab=pruned∗, add fertility=yes 5.3 Table 3: MT results on the French/Spanish EMEA corpus using the new hash sampling method. ∗The last row displays results when we sample target translations from a pruned candidate set (most frequent 1k Spanish words + identity translation candidates) which enables the sampler to run much faster when using more complex models. et al., 2012). This is very encouraging since Nuhn et al. (2012) reported obtaining a speedup by pruning translation candidates (to ∼1/8th the original size) prior to EM training. On the other hand, we sample from the full set of translation candidates including additional target phrase (of size 2) candidates which results in a much larger vocabulary consisting of 1600 candidates (∼4 times the original size), yet our method runs much faster and yields better results. The table also demonstrates the siginificant speedup achieved by the hash sampler over a standard Gibbs sampler for the same model (∼85 times faster when using a 2-gram LM). We also compare the results against MT performance from parallel training—MOSES system (Koehn et al., 2007) trained on 20k sentence pairs. The comparable number for Table 2 is 63.6 BLEU. Spanish (e) French (f) el → les la → la por → des secci´on → rubrique administraci´on → administration Table 4: Sample (1-best) Spanish/French translations produced by the new method on the EMEA corpus using word translation models trained with non-parallel corpora. EMEA Results Table 3 shows the results achieved by our method on the larger task involving EMEA corpus. Here, the target vocabulary Ve is much higher (67k). In spite of this challenge and the model complexity, we can still perform decipherment training using Bayesian inference. We report the first BLEU score results on such a large-scale task using a 2-gram LM. This is achieved without using any seed lexicon or parallel corpora. The results are encouraging and demonstrates the ability of the method to scale to large-scale settings while performing efficient inference with complex models, which we believe will be especially useful for future MT application in scenarios where parallel data is hard to obtain. Table 4 displays some sample 1-best translations learned using this method. For comparison purposes, we also evaluate MT performance on this task using parallel training (MOSES trained with hundred sentence pairs) and observe a BLEU score of 11.7. 369 7 Discussion and Future Work There exists some work (Dou and Knight, 2012; Klementiev et al., 2012) that uses monolingual corpora to induce phrase tables, etc. These when combined with standard MT systems such as Moses (Koehn et al., 2007) trained on parallel corpora, have been shown to yield some BLEU score improvements. Nuhn et al. 
(2012) show some sample English/French lexicon entries learnt using EM algorithm with a pruned translation candidate set on a portion of the Gigaword corpus11 but do not report any actual MT results. In addition, as we showed earlier our method can use Bayesian inference (which has a lot of nice properties compared to EM for unsupervised natural language tasks (Johnson, 2007; Goldwater and Griffiths, 2007)) and still scale easily to large vocabulary, data sizes while allowing the models to grow in complexity. Most importantly, our method produces better translation results (as demonstrated on the OPUS MT task). And to our knowledge, this is the first time that anyone has reported MT results under truly non-parallel settings on such a large-scale task (EMEA). Our method is also easily extensible to outof-domain translation scenarios similar to (Dou and Knight, 2012). While their work also uses Bayesian inference with a slice sampling scheme, our new approach uses a novel hash sampling scheme for decipherment that can easily scale to more complex models. The new decipherment framework also allows one to easily incorporate additional information (besides standard word translations) as features (e.g., context features, topic features, etc.) for unsupervised machine translation which can help further improve the performance in addition to accelerating the sampling process. We already demonstrated the utility of this system by going beyond words and incorporating phrase translations in a decipherment model for the first time. In the future, we can obtain further speedups (especially for large-scale tasks) by parallelizing the sampling scheme seamlessly across multiple machines and CPU cores. The new framework can also be stacked with complementary techniques such as slice sampling, blocked (and type) sampling to further improve inference efficiency. 11http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp? catalogId=LDC2003T05 8 Conclusion To summarize, our method is significantly faster than previous methods based on EM or Bayesian with standard Gibbs sampling and obtains better results than any previously published methods for the same task. The new framework also allows performing Bayesian inference for decipherment applications with more complex models than previously shown. We believe this framework will be useful for further extending MT models in the future to improve translation performance and for many other unsupervised decipherment application scenarios. References Amr Ahmed, Sujith Ravi, Shravan Narayanamurthy, and Alex Smola. 2012. Fastex: Hash clustering with exponential families. In Proceedings of the 26th Conference on Neural Information Processing Systems (NIPS). Moses S. Charikar. 2002. Similarity estimation techniques from rounding algorithms. In Proceedings of the thiry-fourth annual ACM Symposium on Theory of Computing, pages 380–388. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Qing Dou and Kevin Knight. 2012. Large scale decipherment for out-of-domain machine translation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 266–275. Pascale Fung and Kathleen McKeown. 1997. Finding terminology translations from non-parallel corpora. In Proceedings of the 5th Annual Workshop on Very Large Corpora, pages 192–202. Sharon Goldwater and Tom Griffiths. 2007. 
A fully bayesian approach to unsupervised part-of-speech tagging. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 744–751. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL: HLT, pages 771–779. Mark Johnson. 2007. Why doesn’t EM find good HMM POS-taggers? In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 296–305. 370 Alex Klementiev, Ann Irvine, Chris Callison-Burch, and David Yarowsky. 2012. Toward statistical machine translation without parallel corpora. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. Kevin Knight and Kenji Yamada. 1999. A computational approach to deciphering unknown scripts. In Proceedings of the ACL Workshop on Unsupervised Learning in Natural Language Processing, pages 37–44. Philipp Koehn and Kevin Knight. 2000. Estimating word translation probabilities from unrelated monolingual corpora using the em algorithm. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 711–715. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177–180. Malte Nuhn, Arne Mauser, and Hermann Ney. 2012. Deciphering foreign language by combining language models and context vectors. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 156–164. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics, pages 320–322. Sujith Ravi and Kevin Knight. 2011a. Bayesian inference for zodiac and other homophonic ciphers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 239–247. Sujith Ravi and Kevin Knight. 2011b. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 12–21. Deepak Ravichandran, Patrick Pantel, and Eduard Hovy. 2005. Randomized algorithms and nlp: using locality sensitive hash function for high speed noun clustering. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 622–629. Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1048–1057. J¨org Tiedemann. 2009. News from opus - a collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. 
Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237–248.

2013
36
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 372–381, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Automatic Interpretation of the English Possessive Stephen Tratz † Army Research Laboratory Adelphi Laboratory Center 2800 Powder Mill Road Adelphi, MD 20783 [email protected] Eduard Hovy † Carnegie Mellon University Language Technologies Institute 5000 Forbes Avenue Pittsburgh, PA 15213 [email protected] Abstract The English ’s possessive construction occurs frequently in text and can encode several different semantic relations; however, it has received limited attention from the computational linguistics community. This paper describes the creation of a semantic relation inventory covering the use of ’s, an inter-annotator agreement study to calculate how well humans can agree on the relations, a large collection of possessives annotated according to the relations, and an accurate automatic annotation system for labeling new examples. Our 21,938 example dataset is by far the largest annotated possessives dataset we are aware of, and both our automatic classification system, which achieves 87.4% accuracy in our classification experiment, and our annotation data are publicly available. 1 Introduction The English ’s possessive construction occurs frequently in text—approximately 1.8 times for every 100 hundred words in the Penn Treebank1(Marcus et al., 1993)—and can encode a number of different semantic relations including ownership (John’s car), part-of-whole (John’s arm), extent (6 hours’ drive), and location (America’s rivers). Accurate automatic possessive interpretation could aid many natural language processing (NLP) applications, especially those that build semantic representations for text understanding, text generation, question answering, or information extraction. These interpretations could be valuable for machine translation to or from languages that allow different semantic relations to be encoded by †The authors were affiliated with the USC Information Sciences Institute at the time this work was performed. the possessive/genitive. This paper presents an inventory of 17 semantic relations expressed by the English ’s-construction, a large dataset annotated according to the this inventory, and an accurate automatic classification system. The final inter-annotator agreement study achieved a strong level of agreement, 0.78 Fleiss’ Kappa (Fleiss, 1971) and the dataset is easily the largest manually annotated dataset of possessive constructions created to date. We show that our automatic classication system is highly accurate, achieving 87.4% accuracy on a held-out test set. 2 Background Although the linguistics field has devoted significant effort to the English possessive (§6.1), the computational linguistics community has given it limited attention. By far the most notable exception to this is the line of work by Moldovan and Badulescu (Moldovan and Badulescu, 2005; Badulescu and Moldovan, 2009), who define a taxonomy of relations, annotate data, calculate interannotator agreement, and perform automatic classification experiments. Badulescu and Moldovan (2009) investigate both ’s-constructions and of constructions in the same context using a list of 36 semantic relations (including OTHER). They take their examples from a collection of 20,000 randomly selected sentences from Los Angeles Times news articles used in TREC-9. 
For the 960 extracted ’s-possessive examples, only 20 of their semantic relations are observed, including OTHER, with 8 of the observed relations occurring fewer than 10 times. They report a 0.82 Kappa agreement (Siegel and Castellan, 1988) for the two computational semantics graduates who annotate the data, stating that this strong result “can be explained by the instructions the annotators received 1Possessive pronouns such as his and their are treated as ’s constructions in this work. 372 prior to annotation and by their expertise in Lexical Semantics.” Moldovan and Badulescu experiment with several different classification techniques. They find that their semantic scattering technique significantly outperforms their comparison systems with its F-measure score of 78.75. Their SVM system performs the worst with only 23.25% accuracy— suprisingly low, especially considering that 220 of the 960 ’s examples have the same label. Unfortunately, Badulescu and Moldovan (2009) have not publicly released their data2. Also, it is sometimes difficult to understand the meaning of the semantic relations, partly because most relations are only described by a single example and, to a lesser extent, because the bulk of the given examples are of-constructions. For example, why President of Bolivia warrants a SOURCE/FROM relation but University of Texas is assigned to LOCATION/SPACE is unclear. Their relations and provided examples are presented below in Table 1. Relation Examples POSSESSION Mary’s book KINSHIP Mary’s brother PROPERTY John’s coldness AGENT investigation of the crew TEMPORAL last year’s exhibition DEPICTION-DEPICTED a picture of my niece PART-WHOLE the girl’s mouth CAUSE death of cancer MAKE/PRODUCE maker of computer LOCATION/SPACE Univerity of Texas SOURCE/FROM President of Bolivia TOPIC museum of art ACCOMPANIMENT solution of the problem EXPERIENCER victim of lung disease RECIPIENT Josephine’s reward ASSOCIATED WITH contractors of shipyard MEASURE hundred (sp?) of dollars THEME acquisition of the holding RESULT result of the review OTHER state of emergency Table 1: The 20 (out of an original 36) semantic relations observed by Badulescu and Moldovan (2009) along with their examples. 3 Dataset Creation We created the dataset used in this work from three different sources, each representing a distinct genre—newswire, non-fiction, and fiction. Of the 2Email requests asking for relation definitions and the data were not answered, and, thus, we are unable to provide an informative comparison with their work. 21,938 total examples, 15,330 come from sections 2–21 of the Penn Treebank (Marcus et al., 1993). Another 5,266 examples are from The History of the Decline and Fall of the Roman Empire (Gibbon, 1776), a non-fiction work, and 1,342 are from The Jungle Book (Kipling, 1894), a collection of fictional short stories. For the Penn Treebank, we extracted the examples using the provided gold standard parse trees, whereas, for the latter cases, we used the output of an open source parser (Tratz and Hovy, 2011). 4 Semantic Relation Inventory The initial semantic relation inventory for possessives was created by first examining some of the relevant literature on possessives, including work by Badulescu and Moldovan (2009), Barker (1995), Quirk et al. (1985), Rosenbach (2002), and Taylor (1996), and then manually annotating the large dataset of examples. 
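(As an aside on the extraction step of Section 3, the sketch below shows one way (possessor, possessee) pairs might be pulled from dependency parses. The parse representation, head indices, and attachment assumptions are illustrative only, not the exact procedure applied to the gold Penn Treebank trees or the parser output.)

```python
# Illustrative extraction of 's-possessive examples from dependency parses.
POSS_PRONOUNS = {"his", "her", "my", "your", "their", "its", "our"}

def extract_possessives(sentence):
    """Yield (possessor, possessee) surface forms from one parsed sentence.

    `sentence` is a list of dicts with keys: form, tag, head (0-based index
    of the token's syntactic head within the same list).
    """
    for tok in sentence:
        is_clitic = tok["tag"] == "POS"                    # the 's token
        is_pronoun = tok["form"].lower() in POSS_PRONOUNS
        if not (is_clitic or is_pronoun):
            continue
        if is_clitic:
            possessor = sentence[tok["head"]]              # 's attaches to the possessor
            possessee = sentence[possessor["head"]]        # possessor modifies the possessee
        else:
            possessor, possessee = tok, sentence[tok["head"]]
        yield possessor["form"], possessee["form"]

# Under this assumed scheme, a parse of "John 's car" yields ("John", "car").
```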
Similar examples were grouped together to form initial categories, and groups that were considered more difficult were later reexamined in greater detail. Once all the examples were assigned to initial categories, the process of refining the definitions and annotations began. In total, 17 relations were created, not including OTHER. They are shown in Table 3 along with approximate (best guess) mappings to relations defined by others, specifically those of Quirk et al. (1985), whose relations are presented in Table 2, as well as Badulescu and Moldovan’s (2009) relations. Relation Examples POSSESSIVE my wife’s father SUBJECTIVE boy’s application OBJECTIVE the family’s support ORIGIN the general’s letter DESCRIPTIVE a women’s college MEASURE ten days’ absense ATTRIBUTE the victim’s courage PARTITIVE the baby’s eye APPOSITION (marginal) Dublin’s fair city Table 2: The semantic relations proposed by Quirk et al. (1985) for ’s along with some of their examples. 4.1 Refinement and Inter-annotator Agreement The semantic relation inventory was refined using an iterative process, with each iteration involv373 Relation Example HDFRE JB PTB Mappings SUBJECTIVE Dora’s travels 1083 89 3169 Q:SUBJECTIVE, B:AGENT PRODUCER’S PRODUCT Ford’s Taurus 47 44 1183 Q:ORIGIN, B:MAKE/PRODUCE B:RESULT OBJECTIVE Mowgli’s capture 380 7 624 Q:OBJECTIVE, B:THEME CONTROLLER/OWNER/USER my apartment 882 157 3940 QB:POSSESSIVE MENTAL EXPERIENCER Sam’s fury 277 22 232 Q:POSSESSIVE, B:EXPERIENCER RECIPIENT their bonuses 12 6 382 Q:POSSESSIVE, B:RECIPIENT MEMBER’S COLLECTION John’s family 144 31 230 QB:POSSESSIVE PARTITIVE John’s arm 253 582 451 Q:PARTITIVE, B:PART-WHOLE LOCATION Libya’s people 24 0 955 Q:POSSESSIVE, B:SOURCE/FROM B:LOCATION/SPACE TEMPORAL today’s rates 0 1 623 Q:POSSESSIVE, B:TEMPORAL EXTENT 6 hours’ drive 8 10 5 QB:MEASURE KINSHIP Mary’s kid 324 156 264 Q:POSSESSIVE, B:KINSHIP ATTRIBUTE picture’s vividness 1013 34 1017 Q:ATTRIBUTE, B:PROPERTY TIME IN STATE his years in Ohio 145 32 237 QB:POSSESSIVE POSSESSIVE COMPOUND the [men’s room] 0 0 67 Q:DESCRIPTIVE ADJECTIVE DETERMINED his fellow Brit 12 0 33 OTHER RELATIONAL NOUN his friend 629 112 1772 QB:POSSESSIVE OTHER your Lordship 33 59 146 B:OTHER Table 3: Possessive semantic relations along with examples, counts, and approximate mappings to other inventories. Q and B represent Quirk et al. (1985) and Badulescu and Moldovan (2009), respectively. HDFRE, JB, PTB: The History of the Decline and Fall of the Roman Empire, The Jungle Book, and the Penn Treebank, respectively. ing the annotation of a random set of 50 examples. Each set of examples was extracted such that no two examples had an identical possessee word. For a given example, annotators were instructed to select the most appropriate option but could also record a second-best choice to provide additional feedback. Figure 1 presents a screenshot of the HTML-based annotation interface. After the annotation was complete for a given round, agreement and entropy figures were calculated and changes were made to the relation definitions and dataset. The number of refinement rounds was arbitrarily limited to five. To measure agreement, in addition to calculating simple percentage agreement, we computed Fleiss’ Kappa (Fleiss, 1971), a measure of agreement that incorporates a correction for agreement due to chance, similar to Cohen’s Kappa (Cohen, 1960), but which can be used to measure agreement involving more than two annotations per item. 
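For reference, Fleiss' Kappa over the three-way annotations can be computed as in the following sketch; the data layout (one label list per item) is hypothetical.

```python
from collections import Counter

def fleiss_kappa(annotations, categories):
    """Fleiss' kappa for N items, each labeled by the same number of raters.

    `annotations` is a list of per-item label lists (e.g. the three relation
    labels chosen for one possessive example); `categories` is the inventory.
    """
    n_items = len(annotations)
    n_raters = len(annotations[0])
    category_counts = Counter()
    p_bar = 0.0
    for labels in annotations:
        counts = Counter(labels)
        category_counts.update(counts)
        # observed agreement on this item
        p_bar += sum(c * (c - 1) for c in counts.values()) / (n_raters * (n_raters - 1))
    p_bar /= n_items
    # chance agreement from the overall label proportions
    p_e = sum((category_counts[c] / (n_items * n_raters)) ** 2 for c in categories)
    return (p_bar - p_e) / (1 - p_e)

# fleiss_kappa([["PARTITIVE"] * 3, ["KINSHIP", "KINSHIP", "PARTITIVE"]],
#              ["PARTITIVE", "KINSHIP"])   # -> 0.25
```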
The agreement and entropy figures for these five intermediate annotation rounds are given in Table 4. In all the possessive annotation tables, Annotator A refers to the primary author and the labels B and C refer to two additional annotators. To calculate a final measure of inter-annotator agreement, we randomly drew 150 examples from the dataset not used in the previous refinement iterations, with 50 examples coming from each of Figure 1: Screenshot of the HTML template page used for annotation. the three original data sources. All three annotators initially agreed on 82 of the 150 examples, leaving 68 examples with at least some disagreement, including 17 for which all three annotators disagreed. Annotators then engaged in a new task in which they re-annotated these 68 examples, in each case being able to select only from the definitions previously chosen for each example by at least one annotator. No indication of who or how many people had previously selected the definitions was 374 Figure 2: Semantic relation distribution for the dataset presented in this work. HDFRE: History of the Decline and Fall of the Roman Empire; JB: Jungle Book; PTB: Sections 2–21 of the Wall Street Journal portion of the Penn Treebank. given3. Annotators were instructed not to choose a definition simply because they thought they had chosen it before or because they thought someone else had chosen it. After the revision process, all three annotators agreed in 109 cases and all three disagreed in only 6 cases. During the revision process, Annotator A made 8 changes, B made 20 changes, and C made 33 changes. Annotator A likely made the fewest changes because he, as the primary author, spent a significant amount of time thinking about, writing, and re-writing the definitions used for the various iterations. Annotator C’s annotation work tended to be less consistent in general than Annotator B’s throughout this work as well as in a different task not discussed within this paper, which probably why Annotator C made more changes than Annotator B. Prior to this revision process, the three-way Fleiss’ Kappa score was 0.60 but, afterwards, it was at 0.78. The inter-annotator agreement and entropy figures for before and after this revision process, including pairwise scores between individual annotators, are presented in Tables 5 and 6. 4.2 Distribution of Relations The distribution of semantic relations varies somewhat by the data source. The Jungle Book’s distribution is significantly different from the oth3Of course, if three definitions were present, it could be inferred that all three annotators had initially disagreed. ers, with a much larger percentage of PARTITIVE and KINSHIP relations. The Penn Treebank and The History of the Decline and Fall of the Roman Empire were substantially more similar, although there are notable differences. For instance, the LOCATION and TEMPORAL relations almost never occur in The History of the Decline and Fall of the Roman Empire. Whether these differences are due to variations in genre, time period, and/or other factors would be an interesting topic for future study. The distribution of relations for each data source is presented in Figure 2. Though it is harder to compare across datasets using different annotation schemes, there are at least a couple notable differences between the distribution of relations for Badulescu and Moldovan’s (2009) dataset and the distribution of relations used in this work. 
One such difference is the much higher percentage of examples labeled as TEMPORAL—11.35% vs only 2.84% in our data. Another difference is a higher incidence of the KINSHIP relation (6.31% vs 3.39%), although it is far less frequent than it is in The Jungle Book (11.62%). 4.3 Encountered Ambiguities One of the problems with creating a list of relations expressed by ’s-constructions is that some examples can potentially fit into multiple categories. For example, Joe’s resentment encodes 375 both SUBJECTIVE relation and MENTAL EXPERIENCER relations and UK’s cities encodes both PARTITIVE and LOCATION relations. A representative list of these types of issues along with examples designed to illustrate them is presented in Table 7. 5 Experiments For the automatic classification experiments, we set aside 10% of the data for test purposes, and used the the remaining 90% for training. We used 5-fold cross-validation performed using the training data to tweak the included feature templates and optimize training parameters. 5.1 Learning Approach The LIBLINEAR (Fan et al., 2008) package was used to train linear Support Vector Machine (SVMs) for all the experiments in the one-againstthe-rest style. All training parameters took their default values with the exception of the C parameter, which controls the tradeoff between margin width and training error and which was set to 0.02, the point of highest performance in the crossvalidation tuning. 5.2 Feature Generation For feature generation, we conflated the possessive pronouns ‘his’, ‘her’, ‘my’, and ‘your’ to ‘person.’ Similarly, every term matching the case-insensitive regular expression (corp|co|plc|inc|ag|ltd|llc)\\.?) was replaced with the word ‘corporation.’ All the features used are functions of the following five words. • The possessor word • The possessee word • The syntactic governor of the possessee word • The set of words between the possessor and possessee word (e.g., first in John’s first kiss) • The word to the right of the possessee The following feature templates are used to generate features from the above words. Many of these templates utilize information from WordNet (Fellbaum, 1998). • WordNet link types (link type list) (e.g., attribute, hypernym, entailment) • Lexicographer filenames (lexnames)—top level categories used in WordNet (e.g., noun.body, verb.cognition) • Set of words from the WordNet definitions (gloss terms) • The list of words connected via WordNet part-of links (part words) • The word’s text (the word itself) • A collection of affix features (e.g., -ion, -er, -ity, -ness, -ism) • The last {2,3} letters of the word • List of all possible parts-of-speech in WordNet for the word • The part-of-speech assigned by the part-ofspeech tagger • WordNet hypernyms • WordNet synonyms • Dependent words (all words linked as children in the parse tree) • Dependency relation to the word’s syntactic governor 5.3 Results The system predicted correct labels for 1,962 of the 2,247 test examples, or 87.4%. The accuracy figures for the test instances from the Penn Treebank, The Jungle Book, and The History of the Decline and Fall of the Roman Empire were 88.8%, 84.7%, and 80.6%, respectively. The fact that the score for The Jungle Book was the lowest is somewhat surprising considering it contains a high percentage of body part and kinship terms, which tend to be straightforward, but this may be because the other sources comprise approximately 94% of the training examples. 
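The classification setup of Sections 5.1–5.2 can be compressed into a short sketch. Here scikit-learn's LIBLINEAR-backed LinearSVC stands in for the LIBLINEAR package, only a small illustrative subset of the feature templates is shown (the full system also uses WordNet hypernyms, glosses, lexnames, link types, parts of speech, and dependency features), and the training examples are invented.

```python
import re
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC   # LIBLINEAR-backed, one-vs-rest by default

CORP_RE = re.compile(r"^(corp|co|plc|inc|ag|ltd|llc)\.?$", re.I)
PERSON_PRONOUNS = {"his", "her", "my", "your"}

def normalize(word):
    """Lexical conflation applied before feature generation."""
    w = word.lower()
    if w in PERSON_PRONOUNS:
        return "person"
    if CORP_RE.match(w):
        return "corporation"
    return w

def features(possessor, possessee, governor):
    """A toy subset of the feature templates: surface forms and suffixes."""
    r, e, g = map(normalize, (possessor, possessee, governor))
    return {f"possessor={r}": 1, f"possessee={e}": 1, f"governor={g}": 1,
            f"possessor_last3={r[-3:]}": 1, f"possessee_last3={e[-3:]}": 1}

# Toy training run with the paper's C value (examples are invented).
examples = [("John", "arm", "broke", "PARTITIVE"),
            ("Mary", "brother", "visited", "KINSHIP")]
vec = DictVectorizer()
X = vec.fit_transform([features(p, q, g) for p, q, g, _ in examples])
y = [label for *_, label in examples]
clf = LinearSVC(C=0.02).fit(X, y)
```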
Given that human agreement typically represents an upper bound on machine performance in classification tasks, the 87.4% accuracy figure may be somewhat surprising. One explanation is that the examples pulled out for the inter-annotator agreement study each had a unique possessee word. For example, “expectations”, as in “analyst’s expectations”, occurs 26 times as the possessee in the dataset, but, for the inter-annotator agreement study, at most one of these examples could be included. More importantly, when the initial relations were being defined, the data were first sorted based upon the possessee and then the possessor in order to create blocks of similar examples. Doing this allowed multiple examples to be assigned to a category more quickly because one can decide upon a category for the whole lot at once and then just extract the few, if any, that belong to other categories. This is likely to be both faster and more consistent than examining each 376 Agreement (%) Fleiss’ κ Entropy Iteration A vs B A vs C B vs C A vs B A vs C B vs C All A B C 1 0.60 0.68 0.54 0.53 0.62 0.46 0.54 3.02 2.98 3.24 2 0.64 0.44 0.50 0.59 0.37 0.45 0.47 3.13 3.40 3.63 3 0.66 0.66 0.72 0.57 0.58 0.66 0.60 2.44 2.47 2.70 4 0.64 0.30 0.38 0.57 0.16 0.28 0.34 2.80 3.29 2.87 5 0.72 0.66 0.60 0.67 0.61 0.54 0.61 3.21 3.12 3.36 Table 4: Intermediate results for the possessives refinement work. Agreement (%) Fleiss’ κ Entropy Portion A vs B A vs C B vs C A vs B A vs C B vs C All A B C PTB 0.62 0.62 0.54 0.56 0.56 0.46 0.53 3.22 3.17 3.13 HDFRE 0.82 0.78 0.72 0.77 0.71 0.64 0.71 2.73 2.75 2.73 JB 0.74 0.56 0.54 0.70 0.50 0.48 0.56 3.17 3.11 3.17 All 0.73 0.65 0.60 0.69 0.61 0.55 0.62 3.43 3.35 3.51 Table 5: Final possessives annotation agreement figures before revisions. Agreement (%) Fleiss’ κ Entropy Source A vs B A vs C B vs C A vs B A vs C B vs C All A B C PTB 0.78 0.74 0.74 0.75 0.70 0.70 0.72 3.30 3.11 3.35 HDFRE 0.78 0.76 0.76 0.74 0.72 0.72 0.73 3.03 2.98 3.17 JB 0.92 0.90 0.86 0.90 0.87 0.82 0.86 2.73 2.71 2.65 All 0.83 0.80 0.79 0.80 0.77 0.76 0.78 3.37 3.30 3.48 Table 6: Final possessives annotation agreement figures after revisions. First Relation Second Relation Example PARTITIVE CONTROLLER/... BoA’s Mr. Davis PARTITIVE LOCATION UK’s cities PARTITIVE OBJECTIVE BoA’s adviser PARTITIVE OTHER RELATIONAL NOUN BoA’s chairman PARTITIVE PRODUCER’S PRODUCT the lamb’s wool CONTROLLER/... PRODUCER’S PRODUCT the bird’s nest CONTROLLER/... OBJECTIVE his assistant CONTROLLER/... LOCATION Libya’s oil company CONTROLLER/... ATTRIBUTE Joe’s strength CONTROLLER/... MEMBER’S COLLECTION the colonel’s unit CONTROLLER/... RECIPIENT Joe’s trophy RECIPIENT OBJECTIVE Joe’s reward SUBJECTIVE PRODUCER’S PRODUCT Joe’s announcement SUBJECTIVE OBJECTIVE its change SUBJECTIVE CONTROLLER/... Joe’s employee SUBJECTIVE LOCATION Libya’s devolution SUBJECTIVE MENTAL EXPERIENCER Joe’s resentment OBJECTIVE MENTAL EXPERIENCER Joe’s concern OBJECTIVE LOCATION the town’s inhabitants KINSHIP OTHER RELATIONAL NOUN his fiancee Table 7: Ambiguous/multiclass possessive examples. example in isolation. This advantage did not exist in the inter-annotator agreement study. 5.4 Feature Ablation Experiments To evaluate the importance of the different types of features, the same experiment was re-run multiple times, each time including or excluding exactly one feature template. Before each variation, the C parameter was retuned using 5-fold cross validation on the training data. The results for these runs are shown in Table 8. 
Based upon the leave-one-out and only-one feature evaluation experiment results, it appears that the possessee word is more important to classification than the possessor word. The possessor word is still valuable though, with it likely being more 377 valuable for certain categories (e.g., TEMPORAL and LOCATION) than others (e.g., KINSHIP). Hypernym and gloss term features proved to be about equally valuable. Curiously, although hypernyms are commonly used as features in NLP classification tasks, gloss terms, which are rarely used for these tasks, are approximately as useful, at least in this particular context. This would be an interesting result to examine in greater detail. 6 Related Work 6.1 Linguistics Semantic relation inventories for the English ’sconstruction have been around for some time; Taylor (1996) mentions a set of 6 relations enumerated by Poutsma (1914–1916). Curiously, there is not a single dominant semantic relation inventory for possessives. A representative example of semantic relation inventories for ’s-constructions is the one given by Quirk et al. (1985) (presented earlier in Section 2). Interestingly, the set of relations expressed by possessives varies by language. For example, Classical Greek permits a standard of comparison relation (e.g., “better than Plato”) (Nikiforidou, 1991), and, in Japanese, some relations are expressed in the opposite direction (e.g., “blue eye’s doll”) while others are not (e.g., “Tanaka’s face”) (Nishiguchi, 2009). To explain how and why such seemingly different relations as whole+part and cause+effect are expressed by the same linguistic phenomenon, Nikiforidou (1991) pursues an approach of metaphorical structuring in line with the work of Lakoff and Johnson (1980) and Lakoff (1987). She thus proposes a variety of such metaphors as THINGS THAT HAPPEN (TO US) ARE (OUR) POSSESSIONS and CAUSES ARE ORIGINS to explain how the different relations expressed by possessives extend from one another. Certainly, not all, or even most, of the linguistics literature on English possessives focuses on creating lists of semantic relations. Much of the work covering the semantics of the ’s construction in English, such as Barker’s (1995) work, dwells on the split between cases of relational nouns, such as sister, that, by their very definition, hold a specific relation to other real or conceptual things, and non-relational, or sortal nouns (Löbner, 1985), such as car. Vikner and Jensen’s (2002) approach for handling these disparate cases is based upon Pustejovsky’s (1995) generative lexicon framework. They coerce sortal nouns (e.g., car) into being relational, purporting to create a uniform analysis. They split lexical possession into four types: inherent, part-whole, agentive, and control, with agentive and control encompassing many, if not most, of the cases involving sortal nouns. A variety of other issues related to possessives considered by the linguistics literature include adjectival modifiers that significantly alter interpretation (e.g., favorite and former), double genitives (e.g., book of John’s), bare possessives (i.e., cases where the possessee is omitted, as in “Eat at Joe’s”), possessive compounds (e.g., driver’s license), the syntactic structure of possessives, definitiveness, changes over the course of history, and differences between languages in terms of which relations may be expressed by the genitive. 
Representative work includes that by Barker (1995), Taylor (1996), Heine (1997), Partee and Borschev (1998), Rosenbach (2002), and Vikner and Jensen (2002). 6.2 Computational Linguistics Though the relation between nominals in the English possessive construction has received little attention from the NLP community, there is a large body of work that focuses on similar problems involving noun-noun relation interpretation/paraphrasing, including interpreting the relations between the components of noun compounds (Butnariu et al., 2010), disambiguating preposition senses (Litkowski and Hargraves, 2007), or annotating the relation between nominals in more arbitrary constructions within the same sentence (Hendrickx et al., 2009). Whereas some of these lines of work use fixed inventories of semantic relations (Lauer, 1995; Nastase and Szpakowicz, 2003; Kim and Baldwin, 2005; Girju, 2009; Ó Séaghdha and Copestake, 2009; Tratz and Hovy, 2010), other work allows for a nearly infinite number of interpretations (Butnariu and Veale, 2008; Nakov, 2008). Recent SemEval tasks (Butnariu et al., 2009; Hendrickx et al., 2013) pursue this more open-ended strategy. In these tasks, participating systems recover the implicit predicate between the nouns in noun compounds by creating potentially unique paraphrases for each example. For instance, a system might generate the paraphrase made of for the noun com378 Feature Type Word(s) Results L R C G B N LOO OO Gloss Terms ■ 0.867 (0.04) 0.762 (0.08) Hypernyms ■ 0.870 (0.04) 0.760 (0.16) Synonyms ■ 0.873 (0.04) 0.757 (0.32) Word Itself ■ 0.871 (0.04) 0.745 (0.08) Lexnames ■ 0.871 (0.04) 0.514 (0.32) Last Letters ■ 0.870 (0.04) 0.495 (0.64) Lexnames ■ 0.872 (0.04) 0.424 (0.08) Link types ■ 0.874 (0.02) 0.398 (0.64) Link types ■ 0.870 (0.04) 0.338 (0.32) Word Itself ■ 0.870 (0.04) 0.316 (0.16) Last Letters ■ 0.872 (0.02) 0.303 (0.16) Gloss Terms ■ 0.872 (0.02) 0.271 (0.04) Hypernyms ■ 0.875 (0.02) 0.269 (0.08) Word Itself ■ 0.874 (0.02) 0.261 (0.08) Synonyms ■ 0.874 (0.02) 0.260 (0.04) Lexnames ■ 0.874 (0.02) 0.247 (0.04) Part-of-speech List ■ 0.873 (0.02) 0.245 (0.16) Part-of-speech List ■ 0.874 (0.02) 0.243 (0.16) Dependency ■ 0.872 (0.02) 0.241 (0.16) Part-of-speech List ■ 0.874 (0.02) 0.236 (0.32) Link Types ■ 0.874 (0.02) 0.236 (0.64) Word Itself ■ 0.870 (0.02) 0.234 (0.32) Assigned Part-of-Speech ■ 0.874 (0.02) 0.228 (0.08) Affixes ■ 0.873 (0.02) 0.227 (0.16) Assigned Part-of-Speech ■ 0.873 (0.02) 0.194 (0.16) Hypernyms ■ 0.873 (0.02) 0.186 (0.04) Lexnames ■ 0.870 (0.04) 0.170 (0.64) Text of Dependents ■ 0.874 (0.02) 0.156 (0.08) Parts List ■ 0.873 (0.02) 0.141 (0.16) Affixes ■ 0.870 (0.04) 0.114 (0.32) Affixes ■ 0.873 (0.02) 0.105 (0.04) Parts List ■ 0.874 (0.02) 0.103 (0.16) Table 8: Results for leave-one-out and only-one feature template ablation experiment results for all feature templates sorted by the only-one case. L, R, C, G, B, and N stand for left word (possessor), right word (possessee), pairwise combination of outputs for possessor and possessee, syntactic governor of possessee, all tokens between possessor and possessee, and the word next to the possessee (on the right), respectively. The C parameter value used to train the SVMs is shown in parentheses. pound pepperoni pizza. Computer-generated results are scored against a list of human-generated options in order to rank the participating systems. This approach could be applied to possessives interpretation as well. 
Concurrent with the lack of NLP research on the subject is the absence of available annotated datasets for training, evaluation, and analysis. The NomBank project (Meyers et al., 2004) provides coarse annotations for some of the possessive constructions in the Penn Treebank, but only those that meet their criteria. 7 Conclusion In this paper, we present a semantic relation inventory for ’s possessives consisting of 17 relations expressed by the English ’s construction, the largest available manually-annotated collection of possessives, and an effective method for automatically assigning the relations to unseen examples. We explain our methodology for building this inventory and dataset and report a strong level of inter-annotator agreement, reaching 0.78 Kappa overall. The resulting dataset is quite large, at 21,938 instances, and crosses multiple domains, including news, fiction, and historical non-fiction. It is the only large fully-annotated publiclyavailable collection of possessive examples that we are aware of. The straightforward SVMbased automatic classification system achieves 87.4% accuracy—the highest automatic possessive interpretation accuracy figured reported to date. These high results suggest that SVMs are a good choice for automatic possessive interpre379 tation systems, in contrast to Moldovan and Badulescu (2005) findings. The data and software presented in this paper are available for download at http://www.isi.edu/publications/licensedsw/fanseparser/index.html. 8 Future Work Going forward, we would like to examine the various ambiguities of possessives described in Section 4.3. Instead of trying to find the one-best interpretation for a given possessive example, we would like to produce a list of all appropriate intepretations. Another avenue for future research is to study variation in possessive use across genres, including scientific and technical genres. Similarly, one could automatically process large volumes of text from various time periods to investigate changes in the use of the possessive over time. Acknowledgments We would like to thank Charles Zheng and Sarah Benzel for all their annotation work and valuable feedback. References Adriana Badulescu and Dan Moldovan. 2009. A Semantic Scattering Model for the Automatic Interpretation of English Genitives. Natural Language Engineering, 15:215–239. Chris Barker. 1995. Possessive Descriptions. CSLI Publications, Stanford, CA, USA. Cristina Butnariu and Tony Veale. 2008. A ConceptCentered Approach to Noun-Compound Interpretation. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 81–88. Cristina Butnariu, Su Nam Kim, Preslav Nakov, Diarmuid Ó Séaghdha, Stan Szpakowicz, and Tony Veale. 2009. SemEval-2010 Task 9: The Interpretation of Noun Compounds Using Paraphrasing Verbs and Prepositions. In DEW ’09: Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 100– 105. Cristina Butnariu, Su Nam Kim, Preslav Nakov, Diarmuid. Ó Séaghdha, Stan Szpakowicz, and Tony Veale. 2010. SemEval-2010 Task 9: The Interpretation of Noun Compounds Using Paraphrasing Verbs and Prepositions. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 39–44. Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37–46. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A Library for Large Linear Classification. 
Journal of Machine Learning Research, 9:1871–1874. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, MA. Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378–382. Edward Gibbon. 1776. The History of the Decline and Fall of the Roman Empire, volume I of The History of the Decline and Fall of the Roman Empire. Printed for W. Strahan and T. Cadell. Roxanna Girju. 2009. The Syntax and Semantics of Prepositions in the Task of Automatic Interpretation of Nominal Phrases and Compounds: A Cross-linguistic Study. Computational Linguistics - Special Issue on Prepositions in Application, 35(2):185–228. Bernd Heine. 1997. Possession: Cognitive Sources, Forces, and Grammaticalization. Cambridge University Press, United Kingdom. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009), pages 94–99. Iris Hendrickx, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Stan Szpakowicz, and Tony Veale. 2013. Task Description: SemEval2013 Task 4: Free Paraphrases of Noun Compounds. http://www.cs.york.ac.uk/ semeval-2013/task4/. [Online; accessed 1-May-2013]. Su Nam Kim and Timothy Baldwin. 2005. Automatic Interpretation of Noun Compounds using WordNet::Similarity. Natural Language Processing– IJCNLP 2005, pages 945–956. Rudyard Kipling. 1894. The Jungle Book. Macmillan, London, UK. George Lakoff and Mark Johnson. 1980. Metaphors We Live by. The University of Chicago Press, Chicago, USA. George Lakoff. 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. The University of Chicago Press, Chicago, USA. Mark Lauer. 1995. Corpus Statistics Meet the Noun Compound: Some Empirical Results. In Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics, pages 47–54. Ken Litkowski and Orin Hargraves. 2007. SemEval2007 Task 06: Word-Sense Disambiguation of Prepositions. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 24–29. 380 Sebastian Löbner. 1985. Definites. Journal of Semantics, 4(4):279. Mitchell P. Marcus, Mary A. Marcinkiewicz, and Beatrice Santorini. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):330. Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. The NomBank Project: An Interim Report. In Proceedings of the NAACL/HLT Workshop on Frontiers in Corpus Annotation. Dan Moldovan and Adriana Badulescu. 2005. A Semantic Scattering Model for the Automatic Interpretation of Genitives. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 891–898. Preslav Nakov. 2008. Noun Compound Interpretation Using Paraphrasing Verbs: Feasibility Study. In Proceedings of the 13th International Conference on Artificial Intelligence: Methodology, Systems, and Applications, pages 103–117. Vivi Nastase and Stan Szpakowicz. 2003. Exploring Noun-Modifier Semantic Relations. In Fifth International Workshop on Computational Semantics (IWCS-5), pages 285–301. Kiki Nikiforidou. 1991. 
The Meanings of the Genitive: A Case Study in the Semantic Structure and Semantic Change. Cognitive Linguistics, 2(2):149– 206. Sumiyo Nishiguchi. 2009. Qualia-Based Lexical Knowledge for the Disambiguation of the Japanese Postposition No. In Proceedings of the Eighth International Conference on Computational Semantics. Diarmuid Ó Séaghdha and Ann Copestake. 2009. Using Lexical and Relational Similarity to Classify Semantic Relations. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 621–629. Barbara H. Partee and Vladimir Borschev. 1998. Integrating Lexical and Formal Semantics: Genitives, Relational Nouns, and Type-Shifting. In Proceedings of the Second Tbilisi Symposium on Language, Logic, and Computation, pages 229–241. James Pustejovsky. 1995. The Generative Lexicon. MIT Press, Cambridge, MA, USA. Randolph Quirk, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman Inc., New York. Anette Rosenbach. 2002. Genitive Variation in English: Conceptual Factors in Synchronic and Diachronic Studies. Topics in English linguistics. Mouton de Gruyter. Sidney Siegel and N. John Castellan. 1988. Nonparametric statistics for the behavioral sciences. McGraw-Hill. John R. Taylor. 1996. Possessives in English. Oxford University Press, New York. Stephen Tratz and Eduard Hovy. 2010. A Taxonomy, Dataset, and Classifier for Automatic Noun Compound Interpretation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 678–687. Stephen Tratz and Eduard Hovy. 2011. A Fast, Accurate, Non-Projective, Semantically-Enriched Parser. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1257–1268. Carl Vikner and Per Anker Jensen. 2002. A Semantic Analysis of the English Genitive. Interation of Lexical and Formal Semantics. Studia Linguistica, 56(2):191–226. 381
2013
37
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 382–391, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Is a 204 cm Man Tall or Small ? Acquisition of Numerical Common Sense from the Web Katsuma Narisawa1 Yotaro Watanabe1 Junta Mizuno2 Naoaki Okazaki1,3 Kentaro Inui1 1Graduate School of Information Sciences, Tohoku University 2National Institute of Information and Communications Technology (NICT) 3Japan Science and Technology Agency (JST) {katsuma,yotaro-w,junta-m,okazaki,inui}@ecei.tohoku.ac.jp Abstract This paper presents novel methods for modeling numerical common sense: the ability to infer whether a given number (e.g., three billion) is large, small, or normal for a given context (e.g., number of people facing a water shortage). We first discuss the necessity of numerical common sense in solving textual entailment problems. We explore two approaches for acquiring numerical common sense. Both approaches start with extracting numerical expressions and their context from the Web. One approach estimates the distribution of numbers co-occurring within a context and examines whether a given value is large, small, or normal, based on the distribution. Another approach utilizes textual patterns with which speakers explicitly expresses their judgment about the value of a numerical expression. Experimental results demonstrate the effectiveness of both approaches. 1 Introduction Textual entailment recognition (RTE) involves a wide range of semantic inferences to determine whether the meaning of a hypothesis sentence (h) can be inferred from another text (t) (Dagan et al., 2006). Although several evaluation campaigns (e.g., PASCAL/TAC RTE challenges) have made significant progress, the RTE community recognizes the necessity of a deeper understanding of the core phenomena involved in textual inference. Such recognition comes from the ideas that crucial progress may derive from decomposing the complex RTE task into basic phenomena and from solving each basic phenomenon separately (Bentivogli et al., 2010; Sammons et al., 2010; Cabrio and Magnini, 2011; Toledo et al., 2012). Given this background, we focus on solving one of the basic phenomena in RTE: semantic inference related to numerical expressions. The specific problem we address is acquisition of numerical common sense. For example, (1) t : Before long, 3b people will face a water shortage in the world. h : Before long, a serious water shortage will occur in the world. Although recognizing the entailment relation between t and h is frustratingly difficult, we assume this inference is decomposable into three phases: 3b people face a water shortage. ⇔ 3,000,000,000 people face a water shortage. |= many people face a water shortage. |= a serious water shortage. In the first phase, it is necessary to recognize 3b as a numerical expression and to resolve the expression 3b into the exact amount 3,000,000,000. The second phase is much more difficult because we need subjective but common-sense knowledge that 3,000,000,000 people is a large number. In this paper, we address the first and second phases of inference as an initial step towards semantic processing with numerical expressions. The contributions of this paper are four-fold. 1. We examine instances in existing RTE corpora, categorize them into groups in terms of the necessary semantic inferences, and discuss the impact of this study for solving RTE problems with numerical expressions. 2. 
We describe a method of normalizing numerical expressions referring to the same amount in text into a unified semantic representation. 3. We present approaches for aggregating numerical common sense from examples of numerical expressions and for judging whether a given amount is large, small, or normal. 382 4. We demonstrate the effectiveness of this approach, reporting experimental results and analyses in detail. Although it would be ideal to evaluate the impact of this study on the overall RTE task, we evaluate each phase separately. We do this because the existing RTE data sets tend to exhibit very diverse linguistic phenomena, and it is difficult to employ such data for evaluating the real impact of this study. 2 Related work Surprisingly, NLP research has paid little attention to semantic processing of numerical expressions. This is evident when we compare with temporal expressions, for which corpora (e.g., ACE20051, TimeBank2) were developed with annotation schemes (e.g., TIMEX3, TimeML4). Several studies deal with numerical expressions in the context of information extraction (Bakalov et al., 2011), information retrieval (Fontoura et al., 2006; Yoshida et al., 2010), and question answering (Moriceau, 2006). Numbers such as product prices and weights have been common targets of information extraction. Fontoura et al. (2006) and Yoshida et al. (2010) presented algorithms and data structures that allow number-range queries for searching documents. However, these studies do not interpret the quantity (e.g., 3,000,000,000) of a numerical expression (e.g., 3b people), but rather treat numerical expressions as strings. Banerjee et al. (2009) focused on quantity consensus queries, in which there is uncertainty about the quantity (e.g., weight airbus A380 pounds). Given a query, their approach retrieves documents relevant to the query and identifies the quantities of numerical expressions in the retrieved documents. They also proposed methods for enumerating and ranking the candidates for the consensus quantity intervals. Even though our study shares a similar spirit (modeling of consensus for quantities) with Banerjee et al. (2009), their goal is different: to determine ground-truth values for queries. In question answering, to help “sanity check” answers with numerical values that were 1http://www.itl.nist.gov/iad/mig/ tests/ace/ace05/ 2http://www.timeml.org/site/timebank/ timebank.html 3http://timex2.mitre.org/ 4http://timeml.org/site/index.html way out of common-sense ranges, IBM’s PIQUANT (Prager et al., 2003; Chu-Carroll et al., 2003) used information in Cyc (Lenat, 1995). For example, their question-answering system rejects 200 miles as a candidate answer for the height of Mt. Everest, since Cyc knows mountains are between 1,000 and 30,000 ft. high. They also consider the problem of variations in the precision of numbers (e.g., 5 million, 5.1 million, 5,200,390) and unit conversions (e.g., square kilometers and acres). Some recent studies delve deeper into the semantic interpretation of numerical expressions. Aramaki et al. (2007) focused on the physical size of an entity to predict the semantic relation between entities. For example, knowing that a book has a physical size of 20 cm × 25 cm and that a library has a size of 10 m × 10 m, we can estimate that a library contains a book (content-container relation). 
Their method acquires knowledge about entity size from the Web (by issuing queries like “book (*cm x *cm)”), and integrates the knowledge as features for the classification of relations. Davidov and Rappoport (2010) presented a method for the extraction from the Web and approximation of numerical object attributes such as height and weight. Given an object-attribute pair, the study expands the object into a set of comparable objects and then approximates the numerical values even when no exact value can be found in a text. Aramaki et al. (2007) and Davidov and Rappoport (2010) rely on hand-crafted patterns (e.g., “Object is * [unit] tall”), focusing on a specific set of numerical attributes (e.g., height, weight, size). In contrast, this study can handle any kind of target and situation that is quantified by numbers, e.g., number of people facing a water shortage. Recently, the RTE community has started to pay some attention to the appropriate processing of numerical expressions. Iftene (2010) presented an approach for matching numerical ranges expressed by a set of phrases (e.g., more than and at least). Tsuboi et al. (2011) designed hand-crafted rules for matching intervals expressed by temporal expressions. However, these studies do not necessarily focus on semantic processing of numerical expressions; thus, these studies do not normalize units of numerical expressions nor make inferences with numerical common sense. Sammons et al. (2010) reported that most systems submitted to RTE-5 failed on examples 383 where numeric reasoning was necessary. They argued the importance of aligning numerical quantities and performing numerical reasoning in RTE. LoBue and Yates (2011) identified 20 categories of common-sense knowledge that are prevalent in RTE. One of the categories comprises arithmetic knowledge (including computations, comparisons, and rounding). They concluded that many kinds of the common-sense knowledge have received scarce attention from researchers even though the knowledge is essential to RTE. These studies provided a closer look at the phenomena involved in RTE, but they did not propose a solution for handling numerical expressions. 3 Investigation of textual-entailment pairs with numerical expressions In this section, we investigate textual entailment (TE) pairs in existing corpora in order to study the core phenomena that establish an entailment relation. We used two Japanese TE corpora: RITE (Shima et al., 2011) and Odani et al. (2008). RITE is an evaluation workshop of textual entailment organized by NTCIR-9, and it targets the English, Japanese, and Chinese languages. We used the Japanese portions of the development and training data. Odani et al. (2008) is another Japanese corpus that was manually created. The total numbers of text-hypothesis (T-H) pairs are 1,880 (RITE) and 2,471 (Odani). We manually selected sentence pairs in which one or both of the sentences contained a numerical expression. Here, we define the term numerical expression as an expression containing a number or quantity represented by a numeral and a unit. For example, 3 kilometers is a numerical expression with the numeral 3 and the unit kilometer. Note that intensity of 4 is not a numerical expression because intensity is not a unit. We obtained 371 pairs from the 4,351 T-H pairs. We determined the inferences needed to prove ENTAILMENT or CONTRADICTION of the hypotheses, and classified the 371 pairs into 11 categories. 
Note that we ignored T-H pairs in which numerical expressions were unnecessary to prove the entailment relation (e.g., Socrates was sentenced to death by 500 jury members and Socrates was sentenced to death). Out of 371 pairs, we identified 114 pairs in which numerical expressions played a central role in the entailment relation. Table 1 summarizes the categories of TE phenomena we found in the data set. The largest category is numerical matching (32 pairs). We can infer an entailment relation in this category by aligning two numerical expressions, e.g., 2.2 million |= over 800 thousand. This is the most fundamental task in numerical reasoning, interpreting the amount (number, unit, and range) in a numerical expression. We address this task in Section 4.1. The second largest category requires common sense about numerical amounts. In order to recognize textual entailment of pairs in this category, we need common-sense knowledge about humans’ subjective judgment of numbers. We consider this problem in Section 5. To summarize, this study covers 37.9% of the instances in Table 1, focusing on the first and second categories. Due to space limitations, we omit the explanations for the other phenomena, which require such things as lexical knowledge, arithmetic operations, and counting. The coverage of this study might seem small, but it is difficult to handle varied phenomena with a unified approach. We believe that this study forms the basis for investigating other phenomena of numerical expressions in the future. 4 Collecting numerical expressions from the Web In this paper, we explore two approaches to acquiring numerical common sense. Both approaches start with extracting numerical expressions and their context from the Web. We define a context as the verb and its arguments that appear around a numerical expression. For instance, the context of 3b people in the sentence 3b people face a water shortage is “face” and “water shortage.” In order to extract and aggregate numerical expressions in various documents, we converted the numerical expressions into semantic representations (to be described in Section 4.1), and extracted their context (to be described in Section 4.2). The first approach for acquiring numerical common sense estimates the distribution of numbers that co-occur within a context, and examines whether a given value is large, small, or normal based on that distribution (to be described in Section 5.1). The second approach utilizes textual patterns with which speakers explicitly expresses their judgment about the value of a numerical ex384 Category Definition Example # Numerical matching Aligning numerical expressions in T and H, considering differences in unit, range, etc. t: It is said that there are about 2.2 million alcoholics in the whole country. h: It is estimated that there are over 800 thousand people who are alcoholics. 32 Numerical common sense Inferring by interpreting the numerical amount (large or small). t: In the middle of the 21st century, 7 billion people, corresponding to 70% of the global population, will face a water shortage. h: It is concerning that a serious water shortage will spread around the world in the near future. 12 Lexical knowledge Inferring by using numerical aspects of word meanings. t: Mr. and Ms. Sato celebrated their 25th wedding anniversary. h: Mr. and Ms. Sato celebrated their silver wedding anniversary. 12 Arithmetic Arithmetic operations including addition and subtraction. 
t: The number of 2,000-yen bills in circulation has increased to 450 million, in contrast with 440 million 5,000-yen bills. h: The number of 2,000-yen bills in circulation exceeds the number of 5,000-yen bills by 10 million bills. 11 Numeric-range expression of verbs Numerical ranges expressed by verbs (e.g., exceed). t: It is recorded that the maximum wave height reached 13.8 meters during the Sea of Japan Earthquake Tsunami in May 1983. h: During the Sea of Japan Earthquake, the height of the tsunami exceeded 10 meters. 9 Simple Rewrite Rule This includes various simple rules for rewriting. t: The strength of Taro’s grip is No. 1 in his class. h: Taro’s grip is the strongest in his class. 7 State change Expressing the change of a value by a multiplier or ratio. t: Consumption of pickled plums is 1.5 times the rate of 20 years ago. h: Consumption of pickled plums has increased. 6 Ordinal numbers Inference by interpreting ordinal numbers. t: Many precious lives were sacrificed in the Third World War. h: So far, there have been at least three World Wars. 6 Temporal expression Inference by interpreting temporal expressions such as anniversary, age, and ordinal numbers. t: Mr. and Ms. Sato celebrate their 25th wedding anniversary. h: Mr. and Ms. Sato got married 25 years ago. 3 Count Counting up the number of various entities. t: In Japan, there are the Asian Triopsidae, the American Triopsidae, and the European Triopsidae. h: In Japan, there are 3 types of Triopsidae. 3 Others 15 All 116 Table 1: Frequency and simple definitions for each category of the entailment phenomena in the survey. Numerical Semantic representation Expression Value Unit Mod. about seven grams 7 g about roughly 7 kg 7000 g about as heavy as 7 tons 7 × 106 g large as cheap as $1 1 $ small 30–40 people [30, 40] nin (people) more than 30 cars 30 dai (cars) over 7 km per hour 7000 m/h Table 2: Normalized representation examples pression (to be explained in Section 5.2). In this study, we acquired numerical common sense from a collection of 8 billion sentences in 100 million Japanese Web pages (Shinzato et al., 2012). For this reason, we originally designed text patterns specialized for Japanese dependency trees. For the sake of the readers’ understanding, this paper uses examples with English translations for explaining language-independent concepts, and both Japanese and English translations for explaining language-dependent concepts. 4.1 Extracting and normalizing numerical expressions The first step for collecting numerical expressions is to recognize when a numerical expression is mentioned and then to normalize it into a semantic representation. This is the most fundamental String Operation gram(s) set-unit: ‘g’ kilogram(s) set-unit: ‘g’; multiply-value: 1,000 kg set-unit: ‘g’; multiply-value: 1,000 ton(s) set-unit: ‘g’; multiply-value: 1,000,000 nin (people) set-unit: ‘nin’ (person) about set-modifier: ‘about’ as many as set-modifier: ‘large’ as little as set-modifier: ‘small’ Table 3: An example of unit/modifier dictionary step in numerical reasoning and has a number of applications. For example, this step handles cases of numerical matching, as in Table 1. The semantic representation of a numerical expression consists of three fields: the value or range of the real number(s)5, the unit (a string), and the optional modifiers. Table 2 shows some examples of numerical expressions and their semantic representations. 
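For concreteness, the short Python sketch below encodes a few of the Table 2 rows in the three-field representation just described (a value range, a canonical unit, and optional modifiers). It is an illustrative reconstruction, not the authors' implementation; the class and variable names are invented.

```python
# Illustrative sketch of the three-field semantic representation
# (value range, unit, optional modifiers).  Not the authors' code.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class NumericalExpression:
    value: Tuple[float, float]      # all values are stored as ranges, e.g. 75 -> (75, 75)
    unit: str                       # canonical unit string, e.g. 'g', 'nin' (person)
    modifiers: List[str] = field(default_factory=list)   # e.g. ['about', 'large']

# "about seven grams"  ->  value 7, unit 'g', modifier 'about'
about_seven_grams = NumericalExpression(value=(7, 7), unit='g', modifiers=['about'])

# "as heavy as 7 tons" ->  auxiliary unit folded into 'g' via a Table 3-style
#                          entry (set-unit 'g', multiply-value 1,000,000), modifier 'large'
as_heavy_as_7_tons = NumericalExpression(value=(7e6, 7e6), unit='g', modifiers=['large'])

# "30-40 people"       ->  an explicit range with the Japanese unit 'nin' (person)
thirty_to_forty_people = NumericalExpression(value=(30, 40), unit='nin')
```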
During normalization, we identified spelling variants (e.g., kilometer and km) and transformed auxiliary units into their corresponding canonical units (e.g., 2 tons and 2,000 kg to 2,000,000 grams). When a numerical expression is accompanied by a modifier such as over, about, or more than, we updated the value and modifier fields appropriately. 5Internally, all values are represented by ranges (e.g., 75 is represented by the range [75, 75]). 385 We developed an extractor and a normalizer for Japanese numerical expressions6. We will outline the algorithm used in the normalizer with an example sentence: “Roughly three thousand kilograms of meats have been provided every day.” 1. Find numbers in the text by using regular expressions and convert the non-Arabic numbers into their corresponding Arabic numbers. For example, we find three thousand7 and represent it as 3, 000. 2. Check whether the words that precede or follow the number are units that are registered in the dictionary. Transform any auxiliary units. In the example, we find that kilograms8 is a unit. We multiply the value 3, 000 by 1, 000, and obtain the value 3, 000, 000 with the unit g. 3. Check whether the words that precede or follow the number have a modifier that is registered in the dictionary. Update the value and modifier fields if necessary. In the example, we find roughly and set about in the modifier field. We used a dictionary9 to perform procedures 2 and 3 (Table 3). If the words that precede or follow an extracted number match an entry in the dictionary, we change the semantic representation as described in the operation. The modifiers ‘large’ and ‘small’ require elaboration because the method in Section 5.2 relies heavily on these modifiers. We activated the modifier ‘large’ when a numerical expression occurred with the Japanese word mo, which roughly corresponds to as many as, as large as, or as heavy as in English10. Similarly, we activated the modifier ‘small’ when a numerical expression occurred with the word shika, which roughly corresponds to as little as, as small as, or as light as11. These modifiers are important for this study, reflecting the writer’s judgment about the amount. 6The software is available at http://www.cl. ecei.tohoku.ac.jp/∼katsuma/software/ normalizeNumexp/ 7In Japanese 3, 000 is denoted by the Chinese symbols “ 三千”. 8We write kilograms as “キログラム” in Japanese. 9The dictionary is bundled with the tool. See Footnote 6. 10In Japanese, we can use the word mo with a numerical expression to state that the amount is ‘large’ regardless of how large it is (e.g., large, big, many, heavy). 11Similarly, we can use the word shika with any adjective. ᙼࡣ 㖟⾜࡛཭㐩࡟ 㸱㸮㸮ࢻࣝΏࡋࡓ㸬 He gave to a friend $300 at the bank. Japanese: English: nsubj dobj prep_to prep_at Number: {value: 300; unit: ‘$’ } Context: {verb: ‘give’ ; nsubj: ‘he’ ; prep_to: ‘friend’ ; prep_at: ‘bank’ } Figure 1: Example of context extraction 4.2 Extraction of context The next step in acquiring numerical common sense is to capture the context of numerical expressions. Later, we will aggregate numbers that share the same context (see Section 5). The context of a numerical expression should provide sufficient information to determine what it measures. For example, given the sentence, “He gave $300 to a friend at the bank,” it would be better if we could generalize the context to someone gives money to a friend for the numerical expression $300. However, it is a nontrivial task to design an appropriate representation of varying contexts. 
For this reason, we employ a simple rule to capture the context of numerical expressions: we represent the context with the verb that governs the numerical expression and its typed arguments. Figure 1 illustrates the procedure for extracting the context of a numerical expression12. The component in Section 4.1 recognizes $300 as a numerical expression, then normalizes it into a semantic representation. Because the numerical expression is a dependent of the verb gave, we extract the verb and its arguments (except for the numerical expression itself) as the context. After removing inflections and function words from the arguments, we obtain the context representation of Figure 1. 5 Acquiring numerical common sense In this section, we present two approaches for acquiring numerical common sense from a collection of numerical expressions and their contexts. Both approaches start with collecting the numbers (in semantic representation) and contexts of numerical expressions from a large number of sentences (Shinzato et al., 2012), and storing them 12The English dependency tree might look peculiar because it is translated from the Japanese dependency tree. 386 in a database. When a context and a value are given for a prediction (hereinafter called the query context and query value, respectively), these approaches judge whether the query value is large, small, or normal. 5.1 Distribution-based approach Given a query context and query value, this approach retrieves numbers associated with the query context and draws a distribution of normalized numbers. This approach considers the distribution estimated for the query context and determines if the value is within the top 5 percent (large), within the bottom 5 percent (small), or is located in between these regions (normal). The underlying assumption of this approach is that the real distribution of a query (e.g., money given to a friend) can be approximated by the distribution of numbers co-occurring with the context (e.g., give and friend) on the Web. However, the context space generated in Section 4.2 may be too sparse to find numbers in the database, especially when a query context is fine-grained. Therefore, when no item is retrieved for the query context, we employ a backoff strategy to drop some of the uninformative elements in the query context: elements are dropped from the context based on the type of argument, in this order: he (prep to), kara (prep from), ha (nsubj), yori (prep from), made (prep to), nite (prep at), de (prep at, prep by), ni (prep at), wo (dobj), ga (nsubj), and verb. 5.2 Clue-based approach This approach utilizes textual clues with which a speaker explicitly expresses his or her judgment about the amount of a numerical expression. We utilize large and small modifiers (described in Section 4.1), which correspond to textual clues mo (as many as, as large as) and shika (only, as few as), respectively, for detecting humans’ judgments. For example, we can guess that $300 is large if we find an evidential sentence13, He gave as much as $100 to a friend. Similarly to the distribution-based approach, this approach retrieves numbers associated with the query context. This approach computes the 13Although the sentence states a judgment about $100, we can infer that $300 is also large because $300 > $100. largeness L(x) of a value x: L(x) = pl(x) ps(x) + pl(x), (1) pl(x) = {r|rv < x ∧rm ∋large} {r|rm ∋large} , (2) ps(x) = {r|rv > x ∧rm ∋small} {r|rm ∋small} . 
(3) In these equations, r denotes a retrieved item for the query context, and rv and rm represent the normalized value and modifier flags, respectively, of the item r. The numerator of Equation 2 counts the number of numerical expressions that support the judgment that x is large14, and its denominator counts the total number of numerical expressions with large as a modifier. Therefore, pl(x) computes the ratio of times there is textual evidence that says that x is large, to the total number of times there is evidences with large as a modifier. In an analogous way, ps(x) is defined to be the ratio for evidence that says x is small. Hence, L(x) approaches 1 if everyone on the Web claims that x is large, and approaches 0 if everyone claims that x is small. This approach predicts large if L(x) > 0.95, small if L(x) < 0.05, and normal otherwise. 6 Experiments 6.1 Normalizing numerical expressions We evaluated the method that we described in Section 4.1 for extracting and normalizing numerical expressions. In order to prepare a gold-standard data set, we obtained 1,041 sentences by randomly sampling about 1% of the sentences containing numbers (Arabic digits and/or Chinese numerical characters) in a Japanese Web corpus (100 million pages) (Shinzato et al., 2012). For every numerical expression in these sentences, we manually determined a tuple of the normalized value, unit, and modifier. Here, non-numerical expressions such as temporal expressions, telephone numbers, and postal addresses, which were very common, were beyond the scope of the project15. We obtained 329 numerical expressions from the 1,041 sentences. We evaluated the correctness of the extraction and normalization by measuring the precision and 14This corresponds to the events where we find an evidence expression “as many as rv”, where rv < x. 15If a tuple was extracted from a non-numerical expression, we regarded this as a false positive 387 recall using the gold-standard data set16. Our method performed with a precision of 0.78 and a recall of 0.92. Most of the false negatives were caused by the incompleteness of the unit dictionary. For example, the proposed method could not identify 1Ghz as a numerical expression because the unit dictionary did not register Ghz but GHz. It is trivial to improve the recall of the method by enriching the unit dictionary. The major cause of false positives was the semantic ambiguity of expressions. For example, the proposed method identified Seven Hills as a numerical expression although it denotes a location name. In order to reduce false positives, it may be necessary to utilize broader contexts when locating numerical expressions; this could be done by using, for example, a named entity recognizer. This is the next step to pursue in future work. However, these errors do not have a large effect on the estimation of the distribution of the numerical values that occur with specific named entities and idiomatic phrases. Moreover, as explained in Section 5, we draw distributions for fine-grained contexts of numerical expressions. For these reasons, we think that the current performance is sufficient for acquiring numerical common sense. 6.2 Acquisition of numerical common sense 6.2.1 Preparing an evaluation set We built a gold-standard data set for numerical common sense. We applied the method in Section 4.1 to sentences sampled at random from the Japanese Web corpus (Shinzato et al., 2012), and we extracted 2,000 numerical expressions. 
We asked three human judges to annotate every numerical expression with one of six labels, small, relatively small, normal, relatively large, large, and unsure. The label relatively small could be applied to a numerical expression when the judge felt that the amount was rather small (below the normal) but hesitated to label it small. The label relatively large was defined analogously. We gave the following criteria for labeling an item as unsure: when the judgment was highly dependent on the context; when the sentence was incomprehensible; and when it was a non-numerical expressions (false positives of the method are discussed in Section 4.1). Table 4 reports the inter-annotator agreement. 16All fields (value, unit, modifier) of the extracted tuple must match the gold-standard data set. Agreement # expressions 3 annotators 735 (36.7%) 2 annotators 963 (48.2%) no agreement 302 (15.1%) Total 2000 (100.0%) Table 4: Inter-annotator agreement 0" 100" 200" 300" 400" 500" 0" 100" 200" 130" 140" 150" 160" 170" 180" 190" 200" 210" [cm] distribu7on:based" clue:based(large)" clue:based(small)" [#"extrac7on]" (distribu7on:based) [#"extrac7on]" (clue:based) Figure 2: Distributions of numbers with large and small modifiers for the context human’s height. For the evaluation of numerical expressions in the data set, we used those for which at least two annotators assigned the same label. After removing the unsure instances, we obtained 640 numerical expressions (20 small, 35 relatively small, 152 normal, 263 relatively large, and 170 large) as the evaluation set. 6.2.2 Results The proposed method extracted about 23 million pairs of numerical expressions and their context from the corpus (with 100 million Web pages). About 15% of the extracted pairs were accompanied by either a large or small modifier. Figure 2 depicts the distributions of the context human’s height produced by the distribution-based and clue-based approaches. These distributions are quite reasonable as common-sense knowledge: we can interpret that numbers under 150 cm are perceived as small and those above 180 cm as large. We measured the correctness of the proposed methods on the gold-standard data. For this evaluation, we employed two criteria for correctness: strict and lenient. With the strict criterion, the method must predict a label identical to that in the gold-standard. With the lenient criterion, the method was also allowed to predict either large/small or normal when the gold-standard label was relatively large/small. Table 5 reports the precision (P), recall (R), F1 (F1), and accuracy (Acc) of the proposed methods. 388 No. System Gold Sentence Remark 1 small small I think that three men can create such a great thing in the world. Correct 2 normal normal I have two cats. Correct 3 large large It’s above 32 centigrade. Correct 4 large large I earned 10 million yen from horse racing. Correct 5 small normal There are 2 reasons. Difficulty in judging small. Since a few people say, “There are only 2 reasons,” our approach predicted a small label. 6 small large Ten or more people came, and my eight-mat room was packed. Difficulty in modeling the context because this sentence omits the locational argument for the verb came. We should extract the context as the number of people who came to my eight-mat room instead of the number of people who came. 7 small normal I have two friends who have broken up with their boyfriends recently. Difficulty in modeling the context. 
We should extract context as the number of friends who have broken up with their boyfriends recently instead of the number of friends. 8 small large Lack of knowledge. We extract the context as the number of heads of a turtle, but no corresponding information was found on the Web. Table 6: Output example and error analysis. We present translations of the sentences, which were originally in Japanese. Approach Label P R F1 Acc large+ 0.892 0.498 0.695 Distribution normal+ 0.753 0.935 0.844 0.760 small+ 0.273 0.250 0.262 large 0.861 0.365 0.613 Distribution normal 0.529 0.908 0.719 0.590 small 0.222 0.100 0.161 large+ 0.923 0.778 0.851 Clue normal+ 0.814 0.765 0.790 0.770 small+ 0.228 0.700 0.464 large 0.896 0.659 0.778 Clue normal 0.593 0.586 0.590 0.620 small 0.164 0.550 0.357 Table 5: Precision (P), recall (R), F1 score (F1), and accuracy (Acc) of the acquisition of numerical common sense. Labels with the suffix ‘+’ correspond to the lenient criterion. The clue-based approach achieved 0.851 F1 (for large), 0.790 F1 (for normal), and 0.464 (for small) with the lenient criterion. The performance is surprisingly good, considering the subjective nature of this task. The clue-based approach was slightly better than the distribution-based approach. In particular, the clue-based approach is good at predicting large and small labels, whereas the distributionbased approach is good at predicting normal labels. We found some targets for which the distribution on the Web is skewed from the ‘real’ distribution. For example, let us consider the distribution of the context ”the amount of money that a person wins in a lottery”. We can find a number of sentences like if you won the 10-million-dollar lottery, .... In other words, people talk about a large amount of money even if they did not win any money at all. In order to remedy this problem, we may need to enrich the context representation by introducing, for example, the factuality of an event. 6.2.3 Discussion Table 6 shows some examples of predictions from the clue-based approach. Because of space limitations, we mention only the false instances of this approach. The clue-based approach tends to predict small even if the gold-standard label is normal. About half of the errors of the clue-based approach were of this type; this is why the precision for small and the recall for normal are low. The cause of this error is exemplified by the sentence, “there are two reasons.” Human judges label normal to the numerical expression two reasons, but the method predicts small. This is because a few people say there are only two reasons, but no one says there are as many as two reasons. In order to handle these cases, we may need to incorporate the distribution information with the clue-based approach. We found a number of examples for which modeling the context is difficult. Our approach represents the context of a numerical expression with the verb that governs the numerical expression and its typed arguments. However, this approach sometimes misses important information, especially when an argument of the verb is omitted (Example 6). The approach also suffers from the relative clause in Example 7, which conveys an essential context of the number. These are similar to the scope-ambiguity problem such as encoun389 tered with negation and quantification; it is difficult to model the scope when a numerical expression refers to a situation. Furthermore, we encountered some false examples even when we were able to precisely model the context. 
In Example 8, the proposed method was unable to predict the label correctly because no corresponding information was found on the Web. The proposed method might more easily predict a label if we could generalize the word turtle as animal. It may be worth considering using language resources (e.g., WordNet) to generalize the context. 7 Conclusions We proposed novel approaches for acquiring numerical common sense from a collection of texts. The approaches collect numerical expressions and their contexts from the Web, and acquire numerical common sense by considering the distributions of normalized numbers and textual clues such as mo (as many as) and shika (only, as few as). The experimental results showed that our approaches can successfully judge whether a given amount is large, small, or normal. The implementations and data sets used in this study are available on the Web17. We believe that acquisition of numerical common sense is an important step towards a deeper understanding of inferences with numbers. There are three important future directions for this research. One is to explore a more sophisticated approach for precisely modeling the contexts of numbers. Because we confirmed in this paper that these two approaches have different characteristics, it would be interesting to incorporate textual clues into the distribution-based approach by using, for example, machine learning techniques. Finally, we are planning to address the ‘third phase’ of the example explained in Section 1: associating many people face a water shortage with a serious water shortage. Acknowledgments This research was partly supported by JST, PRESTO. This research was partly supported by JSPS KAKENHI Grant Numbers 23240018 and 23700159. 17http://www.cl.ecei.tohoku.ac.jp/ ∼katsuma/resource/numerical common sense/ References Eiji Aramaki, Takeshi Imai, Kengo Miyo, and Kazuhiko Ohe. 2007. Uth: Svm-based semantic relation classification using physical sizes. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 464–467. Anton Bakalov, Ariel Fuxman, Partha Pratim Talukdar, and Soumen Chakrabarti. 2011. SCAD: collective discovery of attribute values. In Proceedings of the 20th international conference on World wide web, WWW ’11, pages 447–456. Somnath Banerjee, Soumen Chakrabarti, and Ganesh Ramakrishnan. 2009. Learning to rank for quantity consensus queries. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’09, pages 243–250. Luisa Bentivogli, Elena Cabrio, Ido Dagan, Danilo Giampiccolo, Medea Lo Leggio, and Bernardo Magnini. 2010. Building textual entailment specialized data sets: a methodology for isolating linguistic phenomena relevant to inference. Proceedings of the Seventh International Conference on Language Resources and Evaluation, pages 3542–3549. Elena Cabrio and Bernardo Magnini. 2011. Towards component-based textual entailment. In Proceedings of the Ninth International Conference on Computational Semantics, IWCS ’11, pages 320–324. Jennifer Chu-Carroll, David A. Ferrucci, John M. Prager, and Christopher A. Welty. 2003. Hybridization in question answering systems. In New Directions in Question Answering’03, pages 116–121. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, pages 177–190. Dmitry Davidov and Ari Rappoport. 
2010. Extraction and approximation of numerical attributes from the web. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1308–1317. Marcus Fontoura, Ronny Lempel, Runping Qi, and Jason Zien. 2006. Inverted index support for numeric search. Internet Mathematics, 3(2):153–185. Adrian Iftene and Mihai-Alex Moruz. 2010. UAIC participation at RTE-6. In Proceedings of the Third Text Analysis Conference (TAC 2010) November. Douglas B Lenat. 1995. Cyc: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33–38. Peter LoBue and Alexander. Yates. 2011. Types of common-sense knowledge needed for recognizing 390 textual entailment. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 329–334. V´eronique Moriceau. 2006. Generating intelligent numerical answers in a question-answering system. In Proceedings of the Fourth International Natural Language Generation Conference, INLG ’06, pages 103–110. Michitaka Odani, Tomohide Shibata, Sadao Kurohashi, and Takayuki Nakata. 2008. Building data of japanese text entailment and recognition of inferencing relation based on automatic achieved similar expression. In Proceeding of 14th Annual Meeting of the Association for ”atural Language Processing, pages 1140–1143. John M. Prager, Jennifer Chu-Carroll, Krzysztof Czuba, Christopher A. Welty, Abraham Ittycheriah, and Ruchi Mahindru. 2003. IBM’s PIQUANT in TREC2003. In TREC, pages 283–292. Mark Sammons, Vinod V.G. Vydiswaran, and Dan Roth. 2010. Ask not what textual entailment can do for you... In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1199–1208. Hideki Shima, Hiroshi Kanayama, Cheng-Wei Lee, Chuan-Jie Lin, Teruko Mitamura, Yusuke Miyao, Shuming Shi, and Koichi Takeda. 2011. Overview of ntcir-9 rite: Recognizing inference in text. In Proceeding of NTCIR-9 Workshop Meeting, pages 291– 301. Keiji Shinzato, Tomohide Shibata, Daisuke Kawahara, and Sadao Kurohashi. 2012. Tsubaki: An open search engine infrastructure for developing information access methodology. Journal of Information Processing, 20(1):216–227. Assaf Toledo, Sophia Katrenko, Stavroula Alexandropoulou, Heidi Klockmann, Asher Stern, Ido Dagan, and Yoad Winter. 2012. Semantic annotation for textual entailment recognition. In Proceedings of the 11th Mexican International Conference on Artificial Intelligence, MICAI ’12. Yuta Tsuboi, Hiroshi Kanayama, Masaki Ohno, and Yuya Unno. 2011. Syntactic difference based approach for ntcir-9 rite task. In Proceedings of the 9th NTCIR Workshop, pages 404–411. Minoru Yoshida, Issei Sato, Hiroshi Nakagawa, and Akira Terada. 2010. Mining numbers in text using suffix arrays and clustering based on dirichlet process mixture models. Advances in Knowledge Discovery and Data Mining, pages 230–237. 391
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 392–401, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Probabilistic Domain Modelling With Contextualized Distributional Semantic Vectors Jackie Chi Kit Cheung University of Toronto 10 King’s College Rd., Room 3302 Toronto, ON, Canada M5S 3G4 [email protected] Gerald Penn University of Toronto 10 King’s College Rd., Room 3302 Toronto, ON, Canada M5S 3G4 [email protected] Abstract Generative probabilistic models have been used for content modelling and template induction, and are typically trained on small corpora in the target domain. In contrast, vector space models of distributional semantics are trained on large corpora, but are typically applied to domaingeneral lexical disambiguation tasks. We introduce Distributional Semantic Hidden Markov Models, a novel variant of a hidden Markov model that integrates these two approaches by incorporating contextualized distributional semantic vectors into a generative model as observed emissions. Experiments in slot induction show that our approach yields improvements in learning coherent entity clusters in a domain. In a subsequent extrinsic evaluation, we show that these improvements are also reflected in multi-document summarization. 1 Introduction Detailed domain knowledge is crucial to many NLP tasks, either as an input for language understanding, or as the goal itself, to acquire such knowledge. For example, in information extraction, a list of slots in the target domain is given to the system, and in natural language generation, content models are trained to learn the content structure of texts in the target domain for information structuring and automatic summarization. Generative probabilistic models have been one popular approach to content modelling. An important advantage of this approach is that the structure of the model can be adapted to fit the assumptions about the structure of the domain and the nature of the end task. As this field has progressed, the formal structures that are assumed to represent a domain have increased in complexity and become more hierarchical. Earlier work assumes a flat set of topics (Barzilay and Lee, 2004), which are expressed as states of a latent random variable in the model. Later work organizes topics into a hierarchy from general to specific (Haghighi and Vanderwende, 2009; Celikyilmaz and Hakkani-Tur, 2010). Recently, Cheung et al. (2013) formalized a domain as a set of frames consisting of prototypical sequences of events, slots, and slot fillers or entities, inspired by classical AI work such as Schank and Abelson’s (1977) scripts. We adopt much of this terminology in this work. For example, in the CRIMINAL INVESTIGATIONS domain, there may be events such as a murder, an investigation of the crime, an arrest, and a trial. These would be indicated by event heads such as kill, arrest, charge, plead. Relevant slots would include VICTIM, SUSPECT, AUTHORITIES, PLEA, etc. One problem faced by this line of work is that, by their nature, these models are typically trained on a small corpus from the target domain, on the order of hundreds of documents. The small size of the training corpus makes it difficult to estimate reliable statistics, especially for more powerful features such as higher-order N-gram features or syntactic features. By contrast, distributional semantic models are trained on large, domain-general corpora. 
These methods model word meaning using the contexts in the training corpus in which the word appears. The most popular approach today is a vector space representation, in which each dimension corresponds to some context word, and the value at that dimension corresponds to the strength of the association between the context word and the target word being modelled. A notion of word similarity arises naturally from these models by comparing the similarity of the word vectors, for example by using a cosine measure. Recently, these models have been extended by considering how distribu392 tional representations can be modified depending on the specific context in which the word appears (Mitchell and Lapata, 2008, for example). Contextualization has been found to improve performance in tasks like lexical substitution and word sense disambiguation (Thater et al., 2011). In this paper, we propose to inject contextualized distributional semantic vectors into generative probabilistic models, in order to combine their complementary strengths for domain modelling. There are a number of potential advantages that distributional semantic models offer. First, they provide domain-general representations of word meaning that cannot be reliably estimated from the small target-domain corpora on which probabilistic models are trained. Second, the contextualization process allows the semantic vectors to implicitly encode disambiguated word sense and syntactic information, without further adding to the complexity of the generative model. Our model, the Distributional Semantic Hidden Markov Model (DSHMM), incorporates contextualized distributional semantic vectors into a generative probabilistic model as observed emissions. We demonstrate the effectiveness of our model in two domain modelling tasks. First, we apply it to slot induction on guided summarization data over five different domains. We show that our model outperforms a baseline version of our method that does not use distributional semantic vectors, as well as a recent state-of-the-art template induction method. Then, we perform an extrinsic evaluation using multi-document summarization, wherein we show that our model is able to learn event and slot topics that are appropriate to include in a summary. From a modelling perspective, these results show that probabilistic models for content modelling and template induction benefit from distributional semantics trained on a much larger corpus. From the perspective of distributional semantics, this work broadens the variety of problems to which distributional semantics can be applied, and proposes methods to perform inference in a probabilistic setting beyond geometric measures such as cosine similarity. 2 Related Work Probabilistic content models were proposed by Barzilay and Lee (2004), and related models have since become popular for summarization (Fung and Ngai, 2006; Haghighi and Vanderwende, 2009), and information ordering (Elsner et al., 2007; Louis and Nenkova, 2012). Other related generative models include topic models and structured versions thereof (Blei et al., 2003; Gruber et al., 2007; Wallach, 2008). In terms of domain learning in the form of template induction, heuristic methods involving multiple clustering steps have been proposed (Filatova et al., 2006; Chambers and Jurafsky, 2011). Most recently, Cheung et al. (2013) propose PROFINDER, a probabilistic model for frame induction inspired by content models. 
Our work is similar in that we assume much of the same structure within a domain and consequently in the model as well (Section 3), but whereas PROFINDER focuses on finding the “correct” number of frames, events, and slots with a nonparametric method, this work focuses on integrating global knowledge in the form of distributional semantics into a probabilistic model. We adopt one of their evaluation procedures and use it to compare with PROFINDER in Section 5. Vector space models form the basis of modern information retrieval (Salton et al., 1975), but only recently have distributional models been proposed that are compositional (Mitchell and Lapata, 2008; Clark et al., 2008; Grefenstette and Sadrzadeh, 2011, inter alia), or that contextualize the meaning of a word using other words in the same phrase (co-compositionality) (Erk and Pad´o, 2008; Dinu and Lapata, 2010; Thater et al., 2011). We recently showed how such models can be evaluated for their ability to support semantic inference for use in complex NLP tasks like question answering or automatic summarization (Cheung and Penn, 2012). Combining distributional information and probabilistic models has actually been explored in previous work. Usually, an ad-hoc clustering step precedes training and is used to bias the initialization of the probabilistic model (Barzilay and Lee, 2004; Louis and Nenkova, 2012), or the clustering is interleaved with iterations of training (Fung et al., 2003). By contrast, our method better modularizes the two, and provides a principled way to train the model. More importantly, previous adhoc clustering methods only use distributional information derived from the target domain itself; initializing based on domain-general distributional information can be problematic because it can bias training towards a local optimum that is inappropriate for the target domain, leading to poor per393 𝐶1 𝐸1 𝐻1 𝑆1 𝐴1 𝐷 𝐶𝑇 𝐸𝑇 𝐻𝑇 𝑆𝑇 𝐴𝑁 . . . 𝑁𝑆 𝑃𝑠𝑒𝑚 𝐴 𝑃𝑙𝑒𝑚𝑚 𝐴 𝑁𝐸 𝑃𝑠𝑒𝑚 𝐻 𝑃𝑙𝑒𝑚𝑚 𝐻 Figure 1: Graphical representation of our model. Distributions that generate the latent variables and hyperparameters are omitted for clarity. formance. 3 Distributional Semantic Hidden Markov Models We now describe the DSHMM model. This model can be thought of as an HMM with two layers of latent variables, representing events and slots in the domain. Given a document consisting of a sequence of T clauses headed by propositional heads ⃗H (verbs or event nouns), and argument noun phrases ⃗A, a DSHMM models the joint probability of observations ⃗H, ⃗A, and latent random variables ⃗E and ⃗S representing domain events and slots respectively; i.e., P( ⃗H, ⃗A, ⃗E, ⃗S). The basic structure of our model is similar to PROFINDER. Each timestep in the model generates one clause in the document. More specifically, it generates the event heads and arguments which are crucial in identifying events and slots. We assume that event heads are verbs or event nouns, while arguments are the head words of their syntactically dependent noun phrases. We also assume that the sequence of clauses and the clauseinternal syntactic structure are fixed, for example by applying a dependency parser. Within each clause, a hierarchy of latent and observed variables maps to corresponding elements in the clause (Table 1), as follows: Event Variables At the top-level, a categorical latent variable Et with NE possible states represents the event that is described by clause t. 
Its value is conditioned on the previous time step’s event variable, following the standard, first-order Markov assumption (P E(Et|Et−1), or P E init(E1) Node Component Textual unit Et Event Clause Sta Slot Noun phrase Ht Event head Verb/event noun Ata Event argument Noun phrase Table 1: The correspondence between nodes in our graphical model, the domain components that they model, and the related elements in the clause. for the first clause). The internal structure of the clause is generated by conditioning on the state of Et, including the head of the clause, and the slots for each argument in the clause. Slot Variables Categorical latent variables with NS possible states represent the slot that an argument fills, and are conditioned on the event variable in the clause, Et (i.e., P S(Sta|Et), for the ath slot variable). The state of Sta is then used to generate an argument Ata. Head and Argument Emissions The head of the clause Ht is conditionally dependent on Et, and each argument Ata is likewise conditioned on its slot variable Sta. Unlike in most applications of HMMs in text processing, in which the representation of a token is simply its word or lemma identity, tokens in DSHMM are also associated with a vector representation of their meaning in context according to a distributional semantic model (Section 3.1). Thus, the emissions can be decomposed into pairs Ht = (lemma(Ht), sem(Ht)) and Ata = (lemma(Ata), sem(Ata)), where lemma and sem are functions that return the lemma identity and the semantic vector respectively. The probability of the head of a clause is thus: P H(Ht|Et) = P H lemm(lemma(Ht)|Et) (1) × P H sem(sem(Ht)|Et), and the probability of a clausal argument is likewise: P A(Ata|Sta) = P A lemm(lemma(Ata)|Sta) (2) × P A sem(sem(Ata)|Sta). All categorical distributions are smoothed using add-δ smoothing (i.e., uniform Dirichlet priors). Based on the independence assumptions described above, the joint probability distribution can be fac394 tored into: P( ⃗H, ⃗A, ⃗E, ⃗S) = P E init(E1) (3) × T Y t=2 P E(Et|Et−1) T Y t=1 P H(Ht|Et) × T Y t=1 Ct Y a=1 P S(Sta|Et)P A(Ata|Sta). 3.1 Vector Space Models of Semantics In this section, we describe several methods for producing the semantic vectors associated with each event head or argument; i.e., the function sem. We chose several simple, but widely studied models, to investigate whether they can be effectively integrated into DSHMM. We start with a description of the training of a basic model without any contextualization, then describe several contextualized models based on recent work. Simple Vector Space Model In the basic version of the model (SIMPLE), we train a termcontext matrix, where rows correspond to target words, and columns correspond to context words. Training begins by counting context words that appear within five words of the target word, ignoring stopwords. We then convert the raw counts to positive pointwise mutual information scores, which has been shown to improve word similarity correlation results (Turney and Pantel, 2010). We set thresholds on the frequencies of words for inclusion as target and context words (given in Section 4). Target words which fall below the threshold are modelled as UNK. All the methods below start from this basic vector representation. Component-wise Operators Mitchell and Lapata (2008) investigate using component-wise operators to combine the vectors of verbs and their intransitive subjects. 
We use component-wise operators to contextualize our vectors, but by combining with all of the arguments, and regardless of the event head’s category. Let event head h be the syntactic head of a number of arguments a1, a2, ...am, and ⃗vh,⃗va1,⃗va2, ...⃗vam be their respective vector representations according to the SIMPLE method. Then, their contextualized vectors ⃗cM&L h ,⃗cM&L a1 , ...⃗cM&L am would be: ⃗cM&L h = ⃗vh ⊙( m K i=1 ⃗vam) (4) ⃗cM&L ai = ⃗vai ⊙⃗vh, ∀i = 1...m, (5) where ⊙represents a component-wise operator, addition or multiplication, and J represents its repeated application. We tested component-wise addition (M&L+) and multiplication (M&L×). Selectional Preferences Erk and Pad´o (2008) (E&P) incorporate inverse selectional preferences into their contextualization function. The intuition is that a word should be contextualized such that its vector representation becomes more similar to the vectors of other words that its dependency neighbours often take in the same syntactic position. For example, suppose catch is the head of the noun ball, in the relation of a direct object. Then, the vector for ball would be contextualized to become similar to the vectors for other frequent direct objects of catch, such as baseball, or cold. Likewise, the vector for catch would be contextualized to become similar to the vectors for throw, hit, etc. Formally, let h take a as its argument in relation r. Then: ⃗cE&P h = ⃗vh × m Y i=1 X w∈L freq(w, r, ai) · ⃗vw, (6) ⃗cE&P a = ⃗va × X w∈L freq(h, r, w) · ⃗vw, (7) where freq(h, r, a) is the frequency of h occurring as the head of a in relation r in the training corpus, L is the lexicon, and × represents component-wise multiplication. Dimensionality Reduction and Vector Emission After contextualization, we apply singular value decomposition (SVD) for dimensionality reduction to reduce the number of model parameters, keeping the k most significant singular values and vectors. In particular, we apply SVD to the m-byn term-context matrix M produced by the SIMPLE method, resulting in the truncated matrices M ≈UkΣkV T k , where Uk is a m-by-k matrix, Σk is k-by-k, and Vk is n-by-k. This takes place after contextualization, so the component-wise operators apply in the original semantic space. Afterwards, the contextualized vector in the original space, ⃗c, can be transformed into a vector in the reduced space, ⃗cR, by ⃗cR = Σ−1 k V T k ⃗c. Distributional semantic vectors are traditionally compared by measures which ignore vector magnitudes, such as cosine similarity, but a multivariate Gaussian is sensitive to magnitudes. Thus, the final step is to normalize ⃗cR into a unit vector by dividing it by its L2 norm, ||⃗cR||. 395 We model the emission of these contextualized vectors in DSHMM as multivariate Gaussian distributions, so the semantic vector emissions can be written as P H sem, P A sem ∼N(µ, Σ), where µ ∈Rk is the mean and Σ ∈Rk×k is the covariance matrix. To avoid overfitting, we regularize the covariance using its conjugate prior, the InverseWishart distribution. We follow the “neutral” setting of hyperparameters given by Ormoneit and Tresp (1995), so that the MAP estimate for the covariance matrix for (event or slot) state i becomes: Σi = P j rij(xj −µi)(xj −µi)T + βI P j rij + 1 , (8) where j indexes all the relevant semantic vectors xj in the training set, rij is the posterior responsibility of state i for vector xj, and β is the remaining hyperparameter that we tune to adjust the amount of regularization. 
To further reduce model complexity, we set the off-diagonal entries of the resulting covariance matrix to zero. 3.2 Training and Inference Inference in DSHMM is accomplished by the standard Inside-Outside and tree-Viterbi algorithms, except that the tree structure is fixed, so there is no need to sum over all possible subtrees. Model parameters are learned by the ExpectationMaximization (EM) algorithm. We tune the hyperparameters (NE, NS, δ, β, k) and the number of EM iterations by two-fold cross-validation1. 3.3 Summary and Generative Process In summary, the following steps are applied to train a DSHMM: 1. Train a distributional semantic model on a large, domain-general corpus. 2. Preprocess and generate contextualized vectors of event heads and arguments in the small corpus in the target domain. 3. Train the DSHMM using the EM algorithm. The formal generative process is as follows: 1. Draw categorical distributions P E init; P E, P S, P H lemm (one per event state); P A lemm (one per slot state) from Dirichlet priors. 2. Draw multivariate Gaussians P H sem, P A sem for each event and slot state, respectively. 1The topic cluster splits and the hyperparameter settings are available at http://www.cs.toronto.edu/ ˜jcheung/dshmm/dshmm.html. 3. Generate the documents, clause by clause. Generating a clause at position t consists of these steps: 1. Generate the event state Et ∼P E (or P E init). 2. Generate the event head components lemm(Ht) ∼P H lemm, sem(Ht) ∼P H sem. 3. Generate a number of slot states Sta ∼P S. 4. For each slot, generate the argument components lemm(Ata) ∼P A lemm, sem(Ata) ∼ P A sem. 4 Experiments We trained the distributional semantic models using the Annotated Gigaword corpus (Napoles et al., 2012), which has been automatically preprocessed and is based on Gigaword 5th edition. This corpus contains almost ten million news articles and more than 4 billion tokens. We used those articles marked as “stories” — the vast majority of them. We modelled the 50,000 most common lemmata as target words, and the 3,000 most common lemmata as context words. We then trained DSHMM and conducted our evaluations on the TAC 2010 guided summarization data set (Owczarzak and Dang, 2010). Lemmatization and extraction of event heads and arguments are done by preprocessing with the Stanford CoreNLP tool suite (Toutanova et al., 2003; de Marneffe et al., 2006). This data set contains 46 topic clusters of 20 articles each, grouped into five topic categories or domains. For example, one topic cluster in the ATTACK category is about the Columbine Massacre. Each topic cluster contains eight human-written “model” summaries (“model” here meaning a gold standard). Half of the articles and model summaries in a topic cluster are used in the guided summarization task, and the rest are used in the update summarization task. We chose this data set because it allows us to conduct various domain-modelling evaluations. First, templates for the domains are provided, and the model summaries are annotated with slots from the template, allowing for an intrinsic evaluation of slot induction (Section 5). Second, it contains multiple domain instances for each of the domains, and each domain instance comes annotated with eight model summaries, allowing for an extrinsic evaluation of our system (Section 6). 396 5 Guided Summarization Slot Induction We first evaluated our models on their ability to produce coherent clusters of entities belonging to the same slot, adopting the experimental procedure of Cheung et al. 
(2013). As part of the official TAC evaluation procedure, model summaries were manually segmented into contributors, and labelled with the slot in the TAC template that the contributor expresses. For example, a summary fragment such as "On 20 April 1999, a massacre occurred at Columbine High School" is segmented into the contributors: (On 20 April 1999, WHEN); (a massacre occurred, WHAT); and (at Columbine High School, WHERE). In the slot induction evaluation, this annotation is used as follows. First, the maximal noun phrases are extracted from the contributors and clustered based on the TAC slot of the contributor. These clusters of noun phrases then become the gold standard clusters against which automatic systems are compared. Noun phrases are considered to be matched if the lemmata of their head words are the same and they are extracted from the same summary. This accounts for the fact that human annotators often only label the first occurrence of a word that belongs to a slot in a summary, and follows the standard evaluation procedure in previous information extraction tasks, such as MUC-4. Pronouns and demonstratives are ignored. This extraction process is noisy, because the meaning of some contributors depends on an entire verb phrase, but we keep this representation to allow a direct comparison to previous work. Because we are evaluating unsupervised systems, the clusters produced by the systems are not labelled, and must be matched to the gold standard clusters. This matching is performed by mapping to each gold cluster the best system cluster according to F1. The same system cluster may be mapped multiple times, because several TAC slots can overlap. For example, in the NATURAL DISASTERS domain, an earthquake may fit both the WHAT slot as well as the CAUSE slot, because it generated a tsunami. We trained a DSHMM separately for each of the five domains with different semantic models, tuning hyperparameters by two-fold cross-validation. We then extracted noun phrase clusters from the model summaries according to the slot labels produced by running the Viterbi algorithm on them.

Method               P     R     F1
HMM w/o semantics    13.8  64.1  22.6*
DSHMM w/ SIMPLE      20.9  27.5  23.7
DSHMM w/ E&P         20.7  27.9  23.8
PROFINDER            23.7  25.0  24.3
DSHMM w/ M&L+        19.7  36.3  25.6*
DSHMM w/ M&L×        22.1  33.2  26.5*
Table 2: Slot induction results on the TAC guided summarization data set. Asterisks (*) indicate that the model is statistically significantly different from PROFINDER in terms of F1 at p < 0.05.

Results We compared DSHMM to two baselines. Our first baseline is PROFINDER, a state-of-the-art template inducer which Cheung et al. (2013) showed to outperform the previous heuristic clustering method of Chambers and Jurafsky (2011). Our second baseline is our DSHMM model without the semantic vector component (HMM w/o semantics). To calculate statistical significance, we use the paired bootstrap method, which can accommodate complex evaluation metrics like F1 (Berg-Kirkpatrick et al., 2012). Table 2 shows the performance of the models. Overall, PROFINDER significantly outperforms the HMM baseline, but not any of the DSHMM models, by F1. The DSHMM models with contextualized semantic vectors achieve the highest F1 scores and are significantly better than PROFINDER. All of the differences in precision and recall between PROFINDER and the other models are significant. The baseline HMM model has highly imbalanced precision and recall.
We think this is because the model is unable to successfully produce coherent clusters, so the best-case mapping procedure during evaluation picked large clusters that have high recall. PROFINDER has slightly higher precision, which may be due to its non-parametric split-merge heuristic. We plan to investigate whether this learning method could improve DSHMM's performance further. Importantly, the contextualization of the vectors seems to be beneficial, at least with the M&L component-wise operators. In the next section, we show that the improvement from contextualization transfers to multi-document summarization results.

6 Multi-document Summarization: An Extrinsic Evaluation
We next evaluated our models extrinsically in the setting of extractive, multi-document summarization. To use the trained DSHMM for extractive summarization, we need a decoding procedure for selecting sentences in the source text to include in the summary. Inspired by the KLSUM and HIERSUM methods of Haghighi and Vanderwende (2009), we develop a criterion based on Kullback-Leibler (KL) divergence between distributions estimated from the source text, and those estimated from the summary. The assumption here is that these distributions should match in a good summary. We describe two methods to use this criterion: a basic unsupervised method (Section 6.1), and a supervised variant that makes use of in-domain summaries to learn the salient slots and events in the domain (Section 6.2).

6.1 A KL-based Criterion
There are four main component distributions from our model that should be considered during extraction: (1) the distribution of events, (2) the distribution of slots, (3) the distribution of event heads, and (4) the distribution of arguments. We estimate (1) as the context-independent probability of being in a certain event state, which can be calculated using the Inside-Outside algorithm. Given a collection of documents D which make up the source text, the distribution of event topics $\hat{P}^E(E)$ is estimated as:

$$\hat{P}^E(E = e) = \frac{1}{Z} \sum_{d \in D} \sum_{t} \frac{In_t(e)\, Out_t(e)}{P(d)}, \quad (9)$$

where $In_t(e)$ and $Out_t(e)$ are the values of the inside and outside trellises at timestep t for some event state e, and Z is a normalization constant. The distribution for a set of sentences in a candidate summary, $\hat{Q}^E(E)$, is identical, except the summation is over the clauses in the candidate summary. Slot distributions $\hat{P}^S(S)$ and $\hat{Q}^S(S)$ (2) are defined analogously, where the summation occurs along all the slot variables. For (3) and (4), we simply use the MLE estimates of the lemma emissions, where the estimates are made over the source text and the candidate summary instead of over the entire training set. All of the candidate summary distributions (i.e., the "$\hat{Q}$ distributions") are smoothed by a small amount, so that the KL-divergence is always finite. Our KL criterion combines the above components linearly, weighting the lemma distributions by the probability of their respective event or slot state:

$$KLScore = D_{KL}(\hat{P}^E \,\|\, \hat{Q}^E) + D_{KL}(\hat{P}^S \,\|\, \hat{Q}^S) + \sum_{e=1}^{N_E} \hat{P}^E(e)\, D_{KL}(\hat{P}^H(H|e) \,\|\, \hat{Q}^H(H|e)) + \sum_{s=1}^{N_S} \hat{P}^S(s)\, D_{KL}(\hat{P}^A(A|s) \,\|\, \hat{Q}^A(A|s)) \quad (10)$$

To produce a summary, sentences from the source text are greedily added such that KLScore is minimized at each step, until the desired summary length is reached, discarding sentences with fewer than five words.
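To make the decoding procedure concrete, the following is a minimal sketch of the greedy selection loop driven by Eq. 10. The distribution containers and the `estimate` callback (which re-estimates the candidate-summary distributions) are our own illustrative assumptions, not the authors' code.

```python
# Sketch of the greedy KL-based selection criterion (Eq. 10). Names are ours.
import numpy as np

def kl(p, q, eps=1e-6):
    """KL divergence D_KL(p || q), with the candidate-summary side smoothed."""
    q = (q + eps) / (q + eps).sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def kl_score(source, summary):
    """Eq. 10: event and slot divergences plus state-weighted lemma divergences."""
    score = kl(source.event, summary.event) + kl(source.slot, summary.slot)
    for e, p_e in enumerate(source.event):
        score += p_e * kl(source.head_lemma[e], summary.head_lemma[e])
    for s, p_s in enumerate(source.slot):
        score += p_s * kl(source.arg_lemma[s], summary.arg_lemma[s])
    return score

def greedy_summary(sentences, source, estimate, max_words=100):
    """Greedily add the sentence minimizing KLScore until the length limit."""
    chosen = []
    while sum(len(s.split()) for s in chosen) < max_words:
        candidates = [s for s in sentences
                      if s not in chosen and len(s.split()) >= 5]
        if not candidates:
            break
        best = min(candidates,
                   key=lambda s: kl_score(source, estimate(chosen + [s])))
        chosen.append(best)
    return chosen
```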
6.2 Supervised Learning
The above unsupervised method results in summaries that closely mirror the source text in terms of the event and slot distributions, but this ignores the fact that not all such topics should be included in a summary. It also ignores genre-specific, stylistic considerations about characteristics of good summary sentences. For example, Woodsend and Lapata (2012) find several factors that indicate sentences should not be included in an extractive summary, such as the presence of personal pronouns. Thus, we implemented a second method, in which we modify the KL criterion above by estimating $\hat{P}^E$ and $\hat{P}^S$ from other model summaries that are drawn from the same domain (i.e. topic category), except for those summaries that are written for the specific topic cluster to be used for evaluation.

6.3 Method and Results
We used the best performing models from the slot induction task and the above unsupervised and supervised methods based on KL-divergence to produce 100-word summaries of the guided summarization source text clusters. We did not compare against PROFINDER, as its structure is different and would have required a different procedure than the KL-criterion we developed above. As shown in the previous evaluation, however, the HMM baseline without semantics and DSHMM with SIMPLE perform similarly in terms of F1, so we consider these competitive baselines. We did not evaluate on the update summarization task, because our method has not been adapted to it. For the evaluation measure, we used the standard ROUGE suite of automatic evaluation measures (Lin, 2004). Note that the evaluation conditions of TAC 2010 are different, and thus those results are not directly comparable to ours. For instance, top performing systems in TAC 2010 make use of manually constructed lists of entities known to fit the slots in the provided templates and sample topic statements, which our method automatically learns. We include the leading baseline results from the competition as a point of reference, as it is a well-known and non-trivial one for news articles. This baseline summary consists of the leading sentences from the most recent document in the source text cluster up to the word length limit.

Method               ROUGE-1         ROUGE-2         ROUGE-SU4
                     unsup.  sup.    unsup.  sup.    unsup.  sup.
Leading baseline     28.0    −       5.39    −       8.6     −
HMM w/o semantics    32.3    32.7    6.45    6.49    10.1    10.2
DSHMM w/ SIMPLE      32.1    32.7    5.81    6.50    9.8     10.2
DSHMM w/ M&L+        32.1    33.4    6.27    6.82    10.0    10.6
DSHMM w/ M&L×        32.4    34.3*   6.35    7.11^   10.2    11.0*
DSHMM w/ E&P         32.8    33.8*   6.38    7.31*   10.3    10.8*
Table 3: TAC 2010 summarization results for three settings of ROUGE. Asterisks (*) indicate that the model is statistically significantly better than the HMM model without semantics at a 95% confidence interval; a caret (^) indicates that the value is marginally so.

Table 3 shows the summarization results for the three most widely-used settings of ROUGE. All of our models outperform the leading baseline by a large margin, demonstrating the effectiveness of the KL-criterion. In terms of unsupervised performance, all of our models perform similarly. Because the unsupervised method mimics the distributions in the source text at all levels, the method may negate the benefit of learning and simply produce summaries that match the source text in the word distributions, thus being an approximation of KLSUM. Looking at the supervised results, however, the semantic vector models show clear gains in ROUGE, whereas the baseline method does not obtain much benefit from supervision.
As in the previous evaluation, the models with contextualized semantic vectors provide the best performance. M&L× performs very well, as in slot induction, but E&P also performs well, unlike in the previous evaluation. This result reinforces the importance of the contextualization procedure for distributional semantic models.

Analysis To better understand what is gained by supervision using in-domain summaries, we analyzed the best performing M&L× model's output summaries for one document cluster from each domain. For each event state, we calculated the ratio $\hat{P}^E_{summ}(e) / \hat{P}^E_{source}(e)$, for the probability of an event state e as estimated from the training summaries and the source text respectively. Likewise, we calculated $\hat{P}^S_{summ}(s) / \hat{P}^S_{source}(s)$ for the slot states. This ratio indicates the change in a state's probability after supervision; the greater the ratio, the more preferred that state becomes after training. We selected the most preferred and dispreferred event and slot for each document cluster, and took the three most probable lemmata from the associated lemma distribution (Table 4). It seems that supervision is beneficial because it picks out important event heads and arguments in the domain, such as charge, trial, and murder in the TRIALS domain. It also helps the summarizer avoid semantically generic words (be or have), pronouns, quotatives, and common but irrelevant words (home, city, restaurant in TRIALS).

Domain      Event heads (+)                Event heads (−)   Slot arguments (+)            Slot arguments (−)
ATTACKS     say, cause, doctor             say, be, have     attack, hostage, troops       he, it, they
TRIALS      charge, trial, accuse          say, be, have     prison, murder, charge        home, city, restaurant
RESOURCES   reduce, increase, university   say, be, have     government, effort, program   he, they, it
DISASTERS   flood, strengthen, engulf      say, be, have     production, statoil, barrel   he, it, they
HEALTH      be, department, have           say, do, make     food, product, meat           she, people, way
Table 4: Analysis of the most probable event heads and arguments in the most preferred (+) and dispreferred (−) events and slots after supervised training.

7 Conclusion
We have shown that contextualized distributional semantic vectors can be successfully integrated into a generative probabilistic model for domain modelling, as demonstrated by improvements in slot induction and multi-document summarization. The effectiveness of our model stems from the use of a large domain-general corpus to train the distributional semantic vectors, and the implicit syntactic and word sense information provided by the contextualization process. Our approach is modular, and allows principled training of the probabilistic model using standard techniques. While we have focused on the overall clustering of entities and the distribution of event and slot topics in this work, we would also like to investigate discourse modelling and content structuring. Finally, our work shows that the application of distributional semantics to NLP tasks need not be confined to lexical disambiguation. We would like to see modern distributional semantic methods incorporated into an even greater variety of applications.

Acknowledgments This work is supported by the Natural Sciences and Engineering Research Council of Canada.

References
Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004.
Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Korea, July. Association for Computational Linguistics. 2The event head say happens to appear in both the most preferred and dispreferred events in the ATTACKS domain. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. The Journal of Machine Learning Research, 3. Asli Celikyilmaz and Dilek Hakkani-Tur. 2010. A hybrid hierarchical model for multi-document summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 815–824, Uppsala, Sweden, July. Association for Computational Linguistics. Nathanael Chambers and Dan Jurafsky. 2011. Template-based information extraction without the templates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 976– 986, Portland, Oregon, USA, June. Association for Computational Linguistics. Jackie Chi Kit Cheung and Gerald Penn. 2012. Evaluating distributional models of semantics for syntactically invariant inference. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 33–43, Avignon, France, April. Association for Computational Linguistics. Jackie Chi Kit Cheung, Hoifung Poon, and Lucy Vanderwende. 2013. Probabilistic frame induction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh. 2008. A compositional distributional model of meaning. In Proceedings of the Second Quantum Interaction Symposium (QI-2008). Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In In LREC 2006. Georgiana Dinu and Mirella Lapata. 2010. Measuring distributional similarity in context. In Proceed400 ings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1162–1172. Micha Elsner, Joseph Austerweil, and Eugene Charniak. 2007. A unified local and global model for discourse coherence. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, Rochester, New York, April. Association for Computational Linguistics. Katrin Erk and Sebastian Pad´o. 2008. A structured vector space model for word meaning in context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 897– 906. Elena Filatova, Vasileios Hatzivassiloglou, and Kathleen McKeown. 2006. Automatic creation of domain templates. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 207–214, Sydney, Australia, July. Association for Computational Linguistics. Pascale Fung and Grace Ngai. 2006. One story, one flow: Hidden markov story models for multilingual multidocument summarization. ACM Transactions on Speech and Language Processing (TSLP), 3(2):1–16. Pascale Fung, Grace Ngai, and Chi-Shun Cheung. 2003. Combining optimal clustering and hidden markov models for extractive summarization. 
In Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering, pages 21–28, Sapporo, Japan, July. Association for Computational Linguistics. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1394– 1404, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Amit Gruber, Michael Rosen-Zvi, and Yair Weiss. 2007. Hidden topic markov models. Artificial Intelligence and Statistics (AISTATS). Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 362–370, Boulder, Colorado, June. Association for Computational Linguistics. Chin Y. Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Stan Szpakowicz and Marie-Francine Moens, editors, Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain, July. Association for Computational Linguistics. Annie Louis and Ani Nenkova. 2012. A coherence model based on syntactic patterns. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Korea, July. Association for Computational Linguistics. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236–244. 1992. Proceedings of the Fourth Message Understanding Conference (MUC-4). Morgan Kaufmann. Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the NAACL-HLT Joint Workshop on Automatic Knowledge Base Construction & Web-scale Knowledge Extraction (AKBC-WEKEX), pages 95– 100. Dirk Ormoneit and Volker Tresp. 1995. Improved gaussian mixture density estimates using bayesian penalty terms and network averaging. In Advances in Neural Information Processing, pages 542–548. Karolina Owczarzak and Hoa T. Dang. 2010. TAC 2010 guided summarization task guidelines. Gerard Salton, Anita Wong, and Chung-Shu Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620. Roger C. Schank and Robert P. Abelson. 1977. Scripts, Plans, Goals, and Understanding: An Inquiry Into Human Knowledge Structures. Lawrence Erlbaum, July. Stefan Thater, Hagen F¨urstenau, and Manfred Pinkal. 2011. Word meaning in context: A simple and effective vector model. In Proceedings of IJCNLP. Kristina Toutanova, Dan Klein, Christoper D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, page 180. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. Hanna M. Wallach. 2008. Structured topic models for language. Doctoral dissertation, University of Cambridge. Kristian Woodsend and Mirella Lapata. 2012. Multiple aspect summarization using integer linear programming. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Korea, July. Association for Computational Linguistics.
2013
39
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 32–42, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Modelling Annotator Bias with Multi-task Gaussian Processes: An Application to Machine Translation Quality Estimation Trevor Cohn and Lucia Specia Department of Computer Science University of Sheffield Sheffield, United Kingdom {t.cohn,l.specia}@sheffield.ac.uk Abstract Annotating linguistic data is often a complex, time consuming and expensive endeavour. Even with strict annotation guidelines, human subjects often deviate in their analyses, each bringing different biases, interpretations of the task and levels of consistency. We present novel techniques for learning from the outputs of multiple annotators while accounting for annotator specific behaviour. These techniques use multi-task Gaussian Processes to learn jointly a series of annotator and metadata specific models, while explicitly representing correlations between models which can be learned directly from data. Our experiments on two machine translation quality estimation datasets show uniform significant accuracy gains from multi-task learning, and consistently outperform strong baselines. 1 Introduction Most empirical work in Natural Language Processing (NLP) is based on supervised machine learning techniques which rely on human annotated data of some form or another. The annotation process is often time consuming, expensive, and prone to errors; moreover there is often considerable disagreement amongst annotators. In general, the predominant perspective to deal with these data annotation issues in previous work has been that there is a single underlying ground truth, and that the annotations collected are noisy and/or biased samples of this. The challenge is then one of quality control, in order to process the data by filtering, averaging or similar to distil the truth. We posit that this perspective is too limiting, especially with respect to linguistic data, where each individual’s idiolect and linguistic background can give rise to many different – and yet equally valid – truths. Particularly in highly subjective annotation tasks, the differences between annotators cannot be captured by simple models such as scaling all instances of a certain annotator by a factor. They can originate from a number of nuanced aspects. This is the case, for example, of annotations on the quality of sentences generated using machine translation (MT) systems, which are often used to build quality estimation models (Blatz et al., 2004; Specia et al., 2009) – our application of interest. In addition to annotators’ own perceptions and expectations with respect to translation quality, a number of factors can affect their judgements on specific sentences. For example, certain annotators may prefer translations produced by rulebased systems as these tend to be more grammatical, while others would prefer sentences produced by statistical systems with more adequate lexical choices. Likewise, some annotators can be biased by the complexity of the source sentence: lengthy sentences are often (subconsciously) assumed to be of low quality by some annotators. An extreme case is the judgement of quality through post-editing time: annotators have different typing speeds, as well as levels of expertise in the task of post-editing, proficiency levels in the language pair, and knowledge of the terminology used in particular sentences. 
These variations result in time measurements that are not comparable across annotators. Thus far, the use of post-editing time has been done on an per-annotator basis (Specia, 2011), or simply averaged across multiple translators (Plitt and Masselot, 2010), both strategies far from ideal. Overall, these myriad of factors affecting quality judgements make the modelling of multiple annotators a very challenging problem. This problem is exacerbated when annotations are provided by non-professional annotators, e.g., through crowdsourcing – a common strategy used 32 to make annotation cheaper and faster, however at the cost of less reliable outcomes. Most related work on quality assurance for data annotation has been developed in the context of crowdsourcing. Common practices include filtering out annotators who substantially deviate from a gold-standard set or present unexpected behaviours (Raykar et al., 2010; Raykar and Yu, 2012), or who disagree with others using, e.g., majority or consensus labelling (Snow et al., 2008; Sheng et al., 2008). Another relevant strand of work aims to model legitimate, systematic biases in annotators (including both non-experts and experts), such as the fact that some annotators tend to be more negative than others, and that some annotators use a wider or narrower range of values (Flach et al., 2010; Ipeirotis et al., 2010). However, with a few exceptions in Computer Vision (e.g., Whitehill et al. (2009), Welinder et al. (2010)), existing work disregard metadata and its impact on labelling. In this paper we model the task of predicting the quality of sentence translations using datasets that have been annotated by several judges with different levels of expertise and reliability, containing translations from a variety of MT systems and on a range of different types of sentences. We address this problem using multi-task learning in which we learn individual models for each context (the task, incorporating the annotator and other metadata: translation system and the source sentence) while also modelling correlations between tasks such that related tasks can mutually inform one another. Our use of multi-task learning allows the modelling of a diversity of truths, while also recognising that they are rarely independent of one another (annotators often agree) by explicitly accounting for inter-annotator correlations. Our approach is based on Gaussian Processes (GPs) (Rasmussen and Williams, 2006), a kernelised Bayesian non-parametric learning framework. We develop multi-task learning models by representing intra-task transfer simply and explicitly as part of a parameterised kernel function. GPs are an extremely flexible probabilistic framework and have been successfully adapted for multi-task learning in a number of ways, e.g., by learning multi-task correlations (Bonilla et al., 2008), modelling per-task variance (Groot et al., 2011) or perannotator biases (Rogers et al., 2010). Our method builds on the work of Bonilla et al. (2008) by explicitly modelling intra-task transfer, which is learned automatically from the data, in order to robustly handle outlier tasks and task variances. We show in our experiments on two translation quality datasets that these multi-task learning strategies are far superior to training individual per-task models or a single pooled model, and moreover that our multi-task learning approach can achieve similar performance to these baselines using only a fraction of the training data. 
In addition to showing empirical performance gains on quality estimation applications, an important contribution of this paper is in introducing Gaussian Processes to the NLP community (we are not strictly the first: Polajnar et al. (2011) used GPs for text classification), a technique that has great potential to further performance in a wider range of NLP applications. Moreover, the algorithms proposed herein can be adapted to improve future annotation efforts, and subsequent use of noisy crowd-sourced data.

2 Quality Estimation
Quality estimation (QE) for MT aims at providing an estimate of the quality of each translated segment – typically a sentence – without access to reference translations. Work in this area has become increasingly popular in recent years as a consequence of the widespread use of MT among real-world users such as professional translators. Examples of applications of QE include improving post-editing efficiency by filtering out low quality segments which would require more effort and time to correct than translating from scratch (Specia et al., 2009), selecting high quality segments to be published as they are, without post-editing (Soricut and Echihabi, 2010), selecting a translation from either an MT system or a translation memory for post-editing (He et al., 2010), selecting the best translation from multiple MT systems (Specia et al., 2010), and highlighting subsegments that need revision (Bach et al., 2011). QE is generally addressed as a machine learning task using a variety of linear and kernel-based regression or classification algorithms to induce models from examples of translations described through a number of features and annotated for quality. For an overview of various algorithms and features we refer the reader to the WMT12 shared task on QE (Callison-Burch et al., 2012). While initial work used annotations derived from automatic MT evaluation metrics (Blatz et al., 2004) such as BLEU (Papineni et al., 2002) at training time, it soon became clear that human labels result in significantly better models (Quirk, 2004). Current work at sentence level is thus based on some form of human supervision. As is typical of subjective annotation tasks, QE datasets should contain multiple annotators to lead to models that are representative. Therefore, work in QE faces all common issues regarding variability in annotators' judgements. The following are a few other features that make our datasets particularly interesting:
• In order to minimise annotation costs, translation instances are often spread among annotators, such that each instance is only labelled by one or a few judges. In fact, for a sizeable dataset (thousands of instances), the annotation of a complete dataset by a single judge may become infeasible.
• It is often desirable to include alternative translations of source sentences produced by multiple MT systems, which requires multiple annotators for unbiased judgements, particularly for labels such as post-editing time (a translation seen a second time will require less editing effort).
• For crowd-sourced annotations it is often impossible to ensure that the same annotators will label the same subset of cases.
These features – which are also typical of many other linguistic annotation tasks – make the learning process extremely challenging. Learning models from datasets annotated by multiple annotators remains an open challenge in QE, as we show in Section 4. In what follows, we present our QE datasets in more detail.
2.1 Datasets
We use two freely available QE datasets to experiment with the techniques proposed in this paper (both datasets can be downloaded from http://www.dcs.shef.ac.uk/~lucia/resources.html):
WMT12: This dataset was distributed as part of the WMT12 shared task on QE (Callison-Burch et al., 2012). It contains 1,832 instances for training, and 422 for test. The English source sentences are a subset of the WMT09-12 test sets. The Spanish MT outputs were created using a standard PBSMT Moses engine. Each instance was annotated with post-editing effort scores from highest effort (score 1) to lowest effort (score 5), where each score identifies an estimated percentage of the MT output that needs to be corrected. The post-editing effort scores were produced independently by three professional translators based on a previously post-edited translation by a fourth translator. In an attempt to accommodate systematic biases among annotators, the final effort score was computed as the weighted average between the three PE-effort scores, with more weight given to the judges with higher standard deviation from their own mean score. This resulted in scores spread more evenly in the [1, 5] range.
WPTP12: This dataset was distributed by Koponen et al. (2012). It contains 299 English sentences translated into Spanish using two or more of eight MT systems randomly selected from all system submissions for WMT11 (Callison-Burch et al., 2011). These MT systems range from online and customised SMT systems to commercial rule-based systems. Translations were post-edited by humans while time was recorded. The labels are the number of seconds spent by a translator editing a sentence normalised by source sentence length. The post-editing was done by eight native speakers of Spanish, including five professional translators and three translation students. Only 20 translations were edited by all eight annotators, with the remaining translations randomly distributed amongst them. The resulting dataset contains 1,624 instances, which were randomly split into 1,300 for training and 300 for test. According to the analysis in Koponen et al. (2012), while on average certain translators were found to be faster than others, their speed in post-editing individual sentences varies considerably, i.e., certain translators are faster at certain sentences. To our knowledge, no previous work has managed to successfully model the prediction of post-editing time from datasets with multiple annotators.

3 Gaussian Process Regression
Machine learning models for quality estimation typically treat the problem as regression, seeking to model the relationship between features of the text input and the human quality judgement as a continuous response variable. Popular choices include Support Vector Machines (SVMs), which have been shown to perform well for quality estimation (Callison-Burch et al., 2012) using non-linear kernel functions such as radial basis functions. In this paper we consider Gaussian Processes (GPs) (Rasmussen and Williams, 2006), a probabilistic machine learning framework incorporating kernels and Bayesian non-parametrics, widely considered state-of-the-art for regression. Despite this, GPs have not been widely used to date in statistical NLP.
GPs are particularly suitable for modelling QE for a number of reasons: 1) they explicitly model uncertainty, which is rife in QE datasets; 2) they allow fitting of expressive kernels to data, in order to modulate the effect of features of varying usefulness; and 3) they can naturally be extended to model correlated tasks using multi-task kernels. We now give a brief overview of GPs, following Rasmussen and Williams (2006). In our regression task, the data consists of n pairs $D = \{(x_i, y_i)\}$, where $x_i \in \mathbb{R}^F$ is an F-dimensional feature vector and $y_i \in \mathbb{R}$ is the response variable. (Our approach generalises to classification, ranking (ordinal regression) and various other training objectives, including mixtures of objectives; we use regression for simplicity of exposition and implementation.) Each instance is a translation and the feature vector encodes its linguistic features; the response variable is a numerical quality judgement: post-editing time or Likert score. As usual, the modelling challenge is to automatically predict the value of y based on the x for unseen test input. GP regression assumes the presence of a latent function, $f : \mathbb{R}^F \to \mathbb{R}$, which maps from the input space of feature vectors x to a scalar. Each response value is then generated from the function evaluated at the corresponding data point, $y_i = f(x_i) + \eta$, where $\eta \sim \mathcal{N}(0, \sigma^2_n)$ is added white noise. Formally, f is drawn from a GP prior, $f(x) \sim \mathcal{GP}(0, k(x, x'))$, which is parameterised by a mean (here, 0) and a covariance kernel function $k(x, x')$. The kernel function represents the covariance (i.e., similarities in the response) between pairs of data points. Intuitively, points that are in close proximity should have high covariance compared to those that are further apart, which constrains f to be a smoothly varying function of its inputs. This intuition is embodied in the squared exponential kernel (a.k.a. radial basis function or Gaussian),

$$k(x, x') = \sigma^2_f \exp\Big(-\frac{1}{2}(x - x')^T A^{-1} (x - x')\Big) \quad (1)$$

where $\sigma^2_f$ is a scaling factor describing the overall levels of variance, and $A = \mathrm{diag}(a)$ is a diagonal matrix of length scales, encoding the smoothness of functions f with respect to each feature. Non-uniform length scales allow for different degrees of smoothness of f in each dimension, such that, e.g., for unimportant features f is relatively flat whereas for very important features f is jagged, such that a small change in the feature value has a large effect. When the values of a are learned automatically from data, as we do herein, this is referred to as the automatic relevance determination (ARD) kernel. Given the generative process defined above, we formulate prediction as Bayesian inference under the posterior, namely

$$p(y_* | x_*, D) = \int_f p(y_* | x_*, f)\, p(f | D)$$

where $x_*$ is a test input and $y_*$ is its response value. The posterior $p(f|D)$ reflects our updated belief over possible functions after observing the training set D, i.e., f should pass close to the response values for each training instance (but need not fit exactly due to additive noise). This is balanced against the smoothness constraints that arise from the GP prior. The predictive posterior can be solved analytically, resulting in

$$y_* \sim \mathcal{N}\big(k_*^T (K + \sigma^2_n I)^{-1} y,\; k(x_*, x_*) - k_*^T (K + \sigma^2_n I)^{-1} k_*\big) \quad (2)$$

where $k_* = [k(x_*, x_1)\ k(x_*, x_2)\ \cdots\ k(x_*, x_n)]^T$ are the kernel evaluations between the test point and the training set, and $K_{ij} = k(x_i, x_j)$ is the kernel (Gram) matrix over the training points.
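As a concrete illustration of Eqs. 1-2 only (the paper's models are trained with the GPML toolbox, and the hyperparameters are fitted by marginal-likelihood ascent rather than fixed as here), a compact NumPy sketch of SE-ARD prediction follows; all names are ours.

```python
# Sketch of GP regression prediction with a squared exponential ARD kernel
# (Eqs. 1-2). Illustrative only; not the GPML-based setup used in the paper.
import numpy as np

def se_ard_kernel(X1, X2, sigma_f, lengthscales):
    """Squared exponential ARD kernel, Eq. 1, with A = diag(lengthscales)."""
    Z1 = X1 / np.sqrt(lengthscales)   # scale each feature dimension
    Z2 = X2 / np.sqrt(lengthscales)
    sq = (Z1**2).sum(1)[:, None] + (Z2**2).sum(1)[None, :] - 2 * Z1 @ Z2.T
    return sigma_f**2 * np.exp(-0.5 * sq)

def gp_predict(X, y, X_star, sigma_f, lengthscales, sigma_n):
    """Predictive mean and variance of Eq. 2 for test inputs X_star."""
    K = se_ard_kernel(X, X, sigma_f, lengthscales)
    K_star = se_ard_kernel(X, X_star, sigma_f, lengthscales)   # n x n_star
    A = K + sigma_n**2 * np.eye(len(X))
    alpha = np.linalg.solve(A, y)
    mean = K_star.T @ alpha
    v = np.linalg.solve(A, K_star)
    var = se_ard_kernel(X_star, X_star, sigma_f, lengthscales).diagonal() \
          - (K_star * v).sum(0)
    return mean, var
```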
Note that the posterior in Eq. (2) includes not only the expected response (the mean) but also the variance, encoding the model's uncertainty, which is important for integration into subsequent processing, e.g., as part of a larger probabilistic model. GP regression also permits an analytic formulation of the marginal likelihood, $p(y|X) = \int_f p(y|X, f)\, p(f)$, which can be used for model training (X are the training inputs). Specifically, we can derive the gradient of the (log) marginal likelihood with respect to the model hyperparameters (i.e., $a$, $\sigma_n$, $\sigma_f$, etc.) and thereby find the type II maximum likelihood estimate using gradient ascent. Note that in general the marginal likelihood is non-convex in the hyperparameter values, and consequently the solutions may only be locally optimal. Here we bootstrap the learning of complex models with many hyperparameters by initialising with the (good) solutions found for simpler models, thereby avoiding poor local optima. We refer the reader to Rasmussen and Williams (2006) for further details. At first glance GPs resemble SVMs, which also admit kernels such as the popular squared exponential kernel in Eq. 1. The key differences are that GPs are probabilistic models and support exact Bayesian inference in the case of regression (approximate inference is required for classification (Rasmussen and Williams, 2006)). Moreover, GPs provide greater flexibility in fitting the kernel hyperparameters, even for complex composite kernels. In typical usage, the kernel hyperparameters for an SVM are fit using held-out estimation, which is inefficient and often involves tying together parameters to limit the search complexity (e.g., using a single scale parameter in the squared exponential). Multiple-kernel learning (Gönen and Alpaydın, 2011) goes some way to addressing this problem within the SVM framework, however this technique is limited to reweighting linear combinations of kernels and has high computational complexity.

3.1 Multi-task Gaussian Process Models
Until now we have considered a standard regression scenario, where each training point is labelled with a single output variable. In order to model multiple different annotators jointly, i.e., multi-task learning, we need to extend the model to handle many tasks. Conceptually, we can consider the multi-task model drawing a latent function for each task, $f_m(x)$, where $m \in \{1, \ldots, M\}$ is the task identifier. This function is then used to explain the response values for all the instances for that task (subject to noise). Importantly, for multi-task learning to be of benefit, the prior over $\{f_m\}$ must correlate the functions over different tasks, e.g., by imposing similarity constraints between the values for $f_m(x)$ and $f_{m'}(x)$. We can consider two alternative perspectives for framing the multi-task learning problem: either isotopic, where we associate each input point x with a vector of outputs, $y \in \mathbb{R}^M$, one for each of the M tasks; or heterotopic, where some of the outputs are missing, i.e., tasks are not constrained to share the same data points (Alvarez et al., 2011). Given the nature of our datasets, we opted for the heterotopic approach, which can handle both singly annotated and multiply annotated data. This can be implemented by augmenting each input point with an additional task identity feature, which is paired with a single y response, and integrated into a GP model with the standard training and inference algorithms. In moving to a task-augmented data representation, we need to revise our kernel function.
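A tiny sketch of this task-augmented, heterotopic representation is given below before the kernel itself is revised; it simply shows how singly- and multiply-annotated judgements can be flattened into (features, task, response) triples. The names are illustrative only and not taken from the paper's code.

```python
# Sketch of the heterotopic, task-augmented data representation: every
# (instance, annotator) judgement becomes one training pair whose input carries
# a task identity alongside the feature vector. Names are illustrative.
import numpy as np

def build_task_augmented_data(features, judgements):
    """features: (n, F) array; judgements: list of (instance_idx, task_id, y)."""
    X, tasks, y = [], [], []
    for i, task, response in judgements:
        X.append(features[i])
        tasks.append(task)
        y.append(response)
    return np.asarray(X), np.asarray(tasks), np.asarray(y)
```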
We use a separable multi-task kernel (Bonilla et al., 2008; Alvarez et al., 2011) of the form

$$k\big((x, d), (x', d')\big) = k_{data}(x, x')\, B_{d,d'}, \quad (3)$$

where $k_{data}(x, x')$ is a standard kernel over the input points, typically a squared exponential (see Eq. 1), and $B \in \mathbb{R}^{D \times D}$ is a positive semi-definite matrix encoding task covariances. Note that the separable kernel in Eq. 3 gives rise to block-structured kernel matrices which permit various optimisations (Bonilla et al., 2008) to reduce the computational complexity of inference, e.g., the matrix inversion in Eq. 2. We develop a series of increasingly complex choices for B, which we compare empirically in Section 4.2:
Independent The simplest case is where B = I, i.e., all pairs of different tasks have zero covariance. This corresponds to independent modelling of each task, although all models share the same data kernel, so this setting is not strictly equivalent to independent training with independent per-task data kernels (with different hyperparameters). Similarly, we might choose to use a single noise variance, $\sigma^2_n$, or an independent noise variance hyperparameter per task.
Pooled Another extreme is B = 1, which ignores the task identity, corresponding to pooling the multi-task data into one large set. Groot et al. (2011) present a method for applying GPs to modelling multi-annotator data using this pooling kernel with independent per-task noise terms. They show on synthetic data experiments that this approach works well at extracting the signal from noise-corrupted inputs.
Combined A simple approach for B is a weighted combination of Independent and Pooled, i.e., B = 1 + aI, where the hyperparameter a ≥ 0 controls the amount of inter-task transfer between each task and the global 'pooled' task (larger values of a need not affect the overall magnitude of k, which can be down-scaled by the $\sigma^2_f$ factor in the data kernel, Eq. 1). For dissimilar tasks, a high value of a allows each task to be modelled independently, while for more similar tasks low a allows the use of a large pool of similar data. A scaled version of this kernel has been shown to correspond to mean regularisation in SVMs when combined with a linear data kernel (Evgeniou et al., 2006). A similar multi-task kernel was proposed by Daumé III (2007), using a linear data kernel and a = 1, which has been shown to result in excellent performance across a range of NLP problems. In contrast to these earlier approaches, we learn the hyperparameter a directly, fitting the relative amounts of inter- versus intra-task transfer to the dataset.
Combined+ We consider an extension to the Combined kernel, B = 1 + diag(a), $a_d \geq 0$, in which each task has a different hyperparameter modulating its independence from the global pool. This additional flexibility can be used, e.g., to allow individual outlier annotators to be modelled independently of the others, by assigning a high value to $a_d$. In contrast, Combined ties together the parameters for all tasks, i.e., all annotators are assumed to have similar quality in that they deviate from the mean to the same degree.
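As a small, purely illustrative sketch of these choices for B and of the separable kernel in Eq. 3, the snippet below builds each task covariance and combines it with a data kernel (for instance, the SE-ARD sketch given earlier). This is not the paper's GPML-based implementation, and the function names are ours.

```python
# Sketch of the task covariance matrices B and the separable kernel of Eq. 3.
import numpy as np

def task_covariance(num_tasks, kind="combined", a=1.0, a_vec=None):
    """Independent: I;  Pooled: all-ones;  Combined: 1 + a*I;
    Combined+: 1 + diag(a_vec), one sharing parameter per task."""
    ones = np.ones((num_tasks, num_tasks))
    if kind == "independent":
        return np.eye(num_tasks)
    if kind == "pooled":
        return ones
    if kind == "combined":
        return ones + a * np.eye(num_tasks)
    if kind == "combined+":
        return ones + np.diag(np.asarray(a_vec))
    raise ValueError(kind)

def multitask_kernel(X, tasks, B, data_kernel):
    """Separable kernel of Eq. 3: k((x,d),(x',d')) = k_data(x,x') * B[d,d']."""
    K_data = data_kernel(X, X)
    return K_data * B[np.ix_(tasks, tasks)]
```

For example, `data_kernel` could be `lambda A, C: se_ard_kernel(A, C, sigma_f, lengthscales)` from the earlier sketch; in the Combined and Combined+ cases the sharing parameters a would be fitted by marginal-likelihood ascent rather than fixed by hand.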
3.2 Integrating Metadata
The approaches above assume that the data is split into an unstructured set of M tasks, e.g., by annotator. However, it is often the case that we have additional information about each data instance in the form of metadata. In our quality estimation experiments we consider as metadata the MT system which produced the translation, and the identity of the source sentence being translated. Many other types of metadata, such as the level of experience of the annotator, could also be used. One way of integrating such metadata would be to define a separate task for every observed combination of metadata values, in which case we treat the metadata as a task descriptor. Doing so naively would however incur a significant penalty, as each task will have very few training instances, resulting in inaccurate models, even with the inter-task kernel approaches defined above. We instead extend the task-level kernels to use the task descriptors directly to represent task correlations. Let $B^{(i)}$ be a square covariance matrix for the ith of the M task descriptors, with a column and row for each value (e.g., annotator identity, translation system, etc.). We redefine the task-level kernel using paired inputs (x, m), where m are the task descriptors,

$$k\big((x, m), (x', m')\big) = k_{data}(x, x') \prod_{i=1}^{M} B^{(i)}_{m_i, m'_i}.$$

This is equivalent to using a structured task kernel $B = B^{(1)} \otimes B^{(2)} \otimes \cdots \otimes B^{(M)}$, where $\otimes$ is the Kronecker product. Using this formulation we can consider any of the above choices for B applied to each task descriptor. In our experiments we consider the Combined and Combined+ kernels, which allow the model to learn the relative importance of each descriptor in terms of independent modelling versus pooling the data.

4 Multi-task Quality Estimation
4.1 Experimental Setup
Feature sets: In all experiments we use 17 shallow QE features that have been shown to perform well in previous work. These were used by a highly competitive baseline entry in the WMT12 shared task, and were extracted here using the system provided by that shared task (the software used to extract these and other features can be downloaded from http://www.quest.dcs.shef.ac.uk/). They include simple counts, e.g., the tokens in sentences, as well as source and target language model probabilities. Each feature was scaled to have zero mean and unit standard deviation on the training set.
Baselines: The baselines use the SVM regression algorithm with a radial basis function kernel and parameters γ, ϵ and C optimised through grid search and 5-fold cross-validation on the training set. This is generally a very strong baseline: in the WMT12 QE shared task, only five out of 19 submissions were able to significantly outperform it, and only by including many complex additional features, tree kernels, etc. We also present µ, a trivial baseline based on predicting for each test instance the training mean (overall, and for specific tasks).

Model                    MAE      RMSE
µ                        0.8279   0.9899
SVM                      0.6889   0.8201
Linear ARD               0.7063   0.8480
Squared exp. Isotropic   0.6813   0.8146
Squared exp. ARD         0.6680   0.8098
Rational quadratic ARD   0.6773   0.8238
Matern(5,2)              0.6772   0.8124
Neural network           0.6727   0.8103
Table 1: Single-task learning results on the WMT12 dataset, trained and evaluated against the weighted averaged response variable. µ is a baseline which predicts the training mean, SVM uses the same system as the WMT12 QE task, and the remainder are GP regression models with different kernels (all include additive noise).

GP: All GP models were implemented using the GPML Matlab toolbox (http://www.gaussianprocess.org/gpml/code). Hyperparameter optimisation was performed using conjugate gradient ascent of the log marginal likelihood function, with up to 100 iterations. The simpler models were initialised with all hyperparameters set to one, while more complex models were initialised using the
For instance, models using ARD kernels were initialised from an equivalent isotropic kernel (which ties all the hyperparameters together), and independent per-task noise models were initialised from a single noise model. This approach was more reliable than random restarts in terms of accuracy and runtime efficiency. Evaluation: We evaluate predictive accuracy using two measures: mean absolute error, MAE = 1 N PN i=1 |yi −ˆyi| and root mean square error, RMSE = q 1 N PN i=1 (yi −ˆyi)2, where yi are the gold standard response values and ˆyi are the model predictions. 4.2 Results Our experiments aim to demonstrate the efficacy of GP regression, both the single task and multitask settings, compared to competitive baselines. WMT12: Single task We start by comparing GP regression with alternative approaches using the WMT12 dataset on the standard task of predicting a weighted mean quality rating (as it was done in the WMT12 QE shared task). Table 1 shows the results for baseline approaches and the GP models, using a variety of different kernels (see Rasmussen and Williams (2006) for details of the kernel functions). From this we can see that all models do much better than the mean baseline and that most of the GP models have lower error than the state-of-the-art SVM. In terms of kernels, the linear kernel performs comparatively worse than non-linear kernels. Overall the squared exponenModel MAE RMSE µ 0.8541 1.0119 Independent SVMs 0.7967 0.9673 EasyAdapt SVM 0.7655 0.9105 Independent 0.7061 0.8534 Pooled 0.7252 0.8754 Pooled & {N} 0.7050 0.8497 Combined 0.6966 0.8448 Combined & {N} 0.6975 0.8476 Combined+ 0.6975 0.8463 Combined+ & {N} 0.7046 0.8595 Table 2: Results on the WMT12 dataset, trained and evaluated over all three annotator’s judgements. Shown above are the training mean baseline µ, single-task learning approaches, and multitask learning models, with the columns showing macro average error rates over all three response values. All systems use a squared exponential ARD kernel in a product with the named taskkernel, and with added noise (per-task noise is denoted {N}, otherwise has shared noise). tial ARD kernel has the best performance under both measures of error, and for this reason we use this kernel in our subsequent experiments. WMT12: Multi-task We now turn to the multitask setting, where we seek to model each of the three annotators’ predictions. Table 2 presents the results. Note that here error rates are measured over all of the three annotators’ judgements, and consequently are higher than those measured against their average response in Table 1. For comparison, taking the predictions of the best model, Combined, in Table 2 and evaluating its averaged prediction has a MAE of 0.6588 vs. the averaged gold standard, significantly outperforming the best model in Table 1. There are a number of important findings in Table 2. First, the independently trained models do well, outperforming the pooled model with fixed noise, indicating that naively pooling the data is counter-productive and that there are annotatorspecific biases. Including per-annotator noise to the pooled model provides a boost in performance, however the best results are obtained using the Combined kernel which brings the strengths of both the independent and pooled settings. There are only minor differences between the different multi-task kernels, and in this case per-annotator noise made little difference. 
An explanation for the contradictory findings about the importance 38 of independent noise is that differences between annotators can already be explained by the MTL model using the multi-task kernel, and need not be explained as noise. The GP models significantly improve over the baselines, including an SVM trained independently and using the EasyAdapt method for multi-task learning (Daum´e III, 2007). While EasyAdapt showed an improvement over the independent SVM, it was a long way short of the GP models. A possible explanation is that in EasyAdapt the multi-task sharing parameter is set at a = 1, which may not be appropriate for the task. In contrast the Combined GP model learned a value of a = 0.01, weighting the value of pooling much more highly than independent training. A remaining question is how these approaches cope with smaller datasets, where issues of data sparsity become more prevalent. To test this, we trained single-task, pooled and multi-task models on randomly sub-sampled training sets of different sizes, and plot their error rates in Figure 1. As expected, for very small datasets pooling outperforms single task learning, however for modest sized datasets of ≥90 training instances pooling was inferior. For all dataset sizes multi-task learning is superior to the other approaches, making much better use of small and large training sets. The MTL model trained on 500 samples had an MAE of 0.7082 ± 0.0042, close to the best results from the full dataset in Table 2, despite using 1 9 as much data: here we use 1 3 as many training instances where each is singly (cf. triply) annotated. The same experiments run with multiplyannotated instances showed much weaker performance, presumably due to the more limited sample of input points and poorer fit of the ARD kernel hyperparameters. This finding suggests that our multi-task learning approach could be used to streamline annotation efforts by reducing the need for extensive multiple annotations. WPTP12 This dataset involves predicting the post-editing time for eight annotators, where we seek to test our model’s capability to use additional metadata. We model the logarithm of the per-word post-editing time, in order to make the response variable more comparable between annotators and across sentences, and generally more Gaussian in shape. In Table 3 immediately we can see that the baseline of predicting the training mean is very difficult to beat, and the trained 50 100 150 200 250 300 350 400 450 500 0.7 0.72 0.74 0.76 0.78 0.8 0.82 Training examples STL MTL Pooled Figure 1: Learning curve comparing MAE for different training methods on the WMT12 dataset, all using a squared exponential ARD data kernel and tied noise parameter. The MTL model uses the Combined task kernel. Each point is the average of 5 runs, and the error bars show ±1 s.d. systems often do worse. Partitioning the data by annotator (µA) gives the best baseline result, while there is less information from the MT system or sentence identity. Single-task learning performs only a little better than these baselines, although some approaches such as the naive pooling perform terribly. This suggests that the tasks are highly different to one another. Interestingly, adding the per-task noise models to the pooling approach greatly improves its performance. The multi-task learning methods performed best when using the annotator identity as the task descriptor, and less well for the MT system and sentence pair, where they only slightly improved over the baseline. 
However, making use of all these layers of metadata together gives substantial further improvements, reaching the best result with Combined_{A,S,T}. The effect of adding per-task noise to these models was less marked than for the pooled models, as in the WMT12 experiments. Inspecting the learned hyperparameters, the combined models learned a large bias towards independent learning over pooling, in contrast to the WMT12 experiments. This may explain the poor performance of EasyAdapt on this dataset.

Model                                      MAE      RMSE
µ                                          0.5596   0.7053
µ_A                                        0.5184   0.6367
µ_S                                        0.5888   0.7588
µ_T                                        0.6300   0.8270
Pooled SVM                                 0.5823   0.7472
Independent_A SVM                          0.5058   0.6351
EasyAdapt SVM                              0.7027   0.8816
SINGLE-TASK LEARNING
Independent_A                              0.5091   0.6362
Independent_S                              0.5980   0.7729
Pooled                                     0.5834   0.7494
Pooled & {N}                               0.4932   0.6275
MULTI-TASK LEARNING: Annotator
Combined_A                                 0.4815   0.6174
Combined_A & {N}                           0.4909   0.6268
Combined+_A                                0.4855   0.6203
Combined+_A & {N}                          0.4833   0.6102
MULTI-TASK LEARNING: Translation system
Combined_S                                 0.5825   0.7482
MULTI-TASK LEARNING: Sentence pair
Combined_T                                 0.5813   0.7410
MULTI-TASK LEARNING: Combinations
Combined_{A,S}                             0.4988   0.6490
Combined_{A,S} & {N_{A,S}}                 0.4707   0.6003
Combined+_{A,S}                            0.4772   0.6094
Combined_{A,S,T}                           0.4588   0.5852
Combined_{A,S,T} & {N_{A,S}}               0.4723   0.6023
Table 3: Results on the WPTP12 dataset, using the log of the post-editing time per word as the response variable. Shown above are the training mean and SVM baselines, single-task learning and multi-task learning results (micro average). The subscripts denote the task split: annotator (A), MT system (S) and sentence identity (T).

5 Conclusion
This paper presented a novel approach for learning from human linguistic annotations by explicitly training models of individual annotators (and possibly additional metadata) using multi-task learning. Our method using Gaussian Processes is flexible, allowing easy learning of inter-dependences between different annotators and other task metadata. Our experiments showed how our approach outperformed competitive baselines on two machine translation quality regression problems, including the highly challenging problem of predicting post-editing time. In future work we plan to apply these techniques to new datasets, particularly noisy crowd-sourced data with much larger numbers of annotators, as well as a wider range of task types and mixtures thereof (regression, ordinal regression, ranking, classification). We also have preliminary positive results for more advanced multi-task kernels, e.g., general dense matrices, which can induce clusters of related tasks. Our multi-task learning approach has much wider application. Models of individual annotators could be used to train machine translation systems to optimise an annotator-specific quality measure, or in active learning for corpus annotation, where the model can suggest the most appropriate instances for each annotator or the best annotator for a given instance. Further, our approach contributes to work based on cheap and fast crowdsourcing of linguistic annotation by minimising the need for careful data curation and quality control.

Acknowledgements This work was funded by the PASCAL2 Harvest Programme, as part of the QuEst project: http://www.quest.dcs.shef.ac.uk/. The authors would like to thank Neil Lawrence and James Hensman for advice on Gaussian Processes, the QuEst participants, particularly José Guilherme Camargo de Souza and Eva Hassler, and the three anonymous reviewers.

References
Mauricio A. Alvarez, Lorenzo Rosasco, and Neil D. Lawrence. 2011.
Kernels for vector-valued functions: A review. Foundations and Trends in Machine Learning, 4(3):195–266. Nguyen Bach, Fei Huang, and Yaser Al-Onaizan. 2011. Goodness: a method for measuring machine translation confidence. In the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 211– 219, Portland, Oregon. John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto 40 Sanchis, and Nicola Ueffing. 2004. Confidence Estimation for Machine Translation. In the 20th International Conference on Computational Linguistics (Coling 2004), pages 315–321, Geneva. Edwin Bonilla, Kian Ming Chai, and Christopher Williams. 2008. Multi-task gaussian process prediction. In Advances in Neural Information Processing Systems (NIPS). Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In the Sixth Workshop on Statistical Machine Translation, pages 22–64, Edinburgh, Scotland. Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical machine translation. In the Seventh Workshop on Statistical Machine Translation, pages 10–51, Montr´eal, Canada. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In the 45th Annual Meeting of the Association for Computational Linguistics, Prague, Czech Republic. Theodoros Evgeniou, Charles A. Micchelli, and Massimiliano Pontil. 2006. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6(1):615. Peter A. Flach, Sebastian Spiegler, Bruno Gol´enia, Simon Price, John Guiver, Ralf Herbrich, Thore Graepel, and Mohammed J. Zaki. 2010. Novel tools to streamline the conference review process: experiences from SIGKDD’09. SIGKDD Explor. Newsl., 11(2):63–67, May. Mehmet G¨onen and Ethem Alpaydın. 2011. Multiple kernel learning algorithms. Journal of Machine Learning Research, 12:2211–2268. Perry Groot, Adriana Birlutiu, and Tom Heskes. 2011. Learning from multiple annotators with gaussian processes. In Proceedings of the 21st international conference on Artificial neural networks - Volume Part II, ICANN’11, pages 159–164, Espoo, Finland. Yifan He, Yanjun Ma, Josef van Genabith, and Andy Way. 2010. Bridging smt and tm with translation recommendation. In the 48th Annual Meeting of the Association for Computational Linguistics, pages 622–630, Uppsala, Sweden. Panagiotis G. Ipeirotis, Foster Provost, and Jing Wang. 2010. Quality management on amazon mechanical turk. In Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP ’10, pages 64–67, Washington DC. Maarit Koponen, Wilker Aziz, Luciana Ramos, and Lucia Specia. 2012. Post-editing time as a measure of cognitive effort. In Proceedings of the AMTA 2012 Workshop on Post-editing Technology and Practice, WPTP 2012, San Diego, CA. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania. Mirko Plitt and Franc¸ois Masselot. 2010. A productivity test of statistical machine translation post-editing in a typical localisation context. Prague Bull. Math. Linguistics, 93:7–16. Tamara Polajnar, Simon Rogers, and Mark Girolami. 2011. Protein interaction detection in sentences via gaussian processes; a preliminary evaluation. Int. J. Data Min. 
Bioinformatics, 5(1):52–72, February. Christopher B. Quirk. 2004. Training a sentence-level machine translation confidence metric. In Proceedings of the International Conference on Language Resources and Evaluation, volume 4 of LREC 2004, pages 825–828, Lisbon, Portugal. Carl E. Rasmussen and Christopher K.I. Williams. 2006. Gaussian processes for machine learning, volume 1. MIT press Cambridge, MA. Vikas C. Raykar and Shipeng Yu. 2012. Eliminating spammers and ranking annotators for crowdsourced labeling tasks. J. Mach. Learn. Res., 13:491–518. Vikas C. Raykar, Shipeng Yu, Linda H. Zhao, Gerardo Hermosillo Valadez, Charles Florin, Luca Bogoni, and Linda Moy. 2010. Learning from crowds. J. Mach. Learn. Res., 99:1297–1322. Simon Rogers, Mark Girolami, and Tamara Polajnar. 2010. Semi-parametric analysis of multi-rater data. Statistics and Computing, 20(3):317–334. Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis. 2008. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD, KDD’08, pages 614–622, Las Vegas, Nevada. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast—but is it good? Evaluating non-expert annotations for natural language tasks. In the 2008 Conference on Empirical Methods in Natural Language Processing, pages 254–263, Honolulu, Hawaii. Radu Soricut and Abdessamad Echihabi. 2010. Trustrank: Inducing trust in automatic translations via ranking. In the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 612–621, Uppsala, Sweden, July. Lucia Specia, Marco Turchi, Nicola Cancedda, Marc Dymetman, and Nello Cristianini. 2009. Estimating the Sentence-Level Quality of Machine Translation Systems. In the 13th Annual Meeting of the European Association for Machine Translation (EAMT’2009), pages 28–37, Barcelona. 41 Lucia Specia, Dhwaj Raj, and Marco Turchi. 2010. Machine translation evaluation versus quality estimation. Machine Translation, pages 39–50. Lucia Specia. 2011. Exploiting Objective Annotations for Measuring Translation Post-editing Effort. In the 15th Annual Meeting of the European Association for Machine Translation (EAMT’2011), pages 73– 80, Leuven. Peter Welinder, Steve Branson, Serge Belongie, and Pietro Perona. 2010. The Multidimensional Wisdom of Crowds. In Advances in Neural Information Processing Systems, volume 23, pages 2424–2432. Jacob Whitehill, Paul Ruvolo, Ting-fan Wu, Jacob Bergsma, and Javier Movellan. 2009. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. Advances in Neural Information Processing Systems, 22:2035– 2043. 42
2013
4
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 402–411, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Extracting bilingual terminologies from comparable corpora Ahmet Aker, Monica Paramita, Robert Gaizauskas University of Sheffield ahmet.aker, m.paramita, r.gaizauskas@sheffield.ac.uk Abstract In this paper we present a method for extracting bilingual terminologies from comparable corpora. In our approach we treat bilingual term extraction as a classification problem. For classification we use an SVM binary classifier and training data taken from the EUROVOC thesaurus. We test our approach on a held-out test set from EUROVOC and perform precision, recall and f-measure evaluations for 20 European language pairs. The performance of our classifier reaches the 100% precision level for many language pairs. We also perform manual evaluation on bilingual terms extracted from English-German term-tagged comparable corpora. The results of this manual evaluation showed 60-83% of the term pairs generated are exact translations and over 90% exact or partial translations. 1 Introduction Bilingual terminologies are important for various applications of human language technologies, including cross-language information search and retrieval, statistical machine translation (SMT) in narrow domains and computer-aided assistance to human translators. Automatic construction of bilingual terminology mappings has been investigated in many earlier studies and various methods have been applied to this task. These methods may be distinguished by whether they work on parallel or comparable corpora, by whether they assume monolingual term recognition in source and target languages (what Moore (2003) calls symmetrical approaches) or only in the source (asymmetric approaches), and by the extent to which they rely on linguistic knowledge as opposed to simply statistical techniques. We focus on techniques for bilingual term extraction from comparable corpora – collections of source-target language document pairs that are not direct translations but are topically related. We choose to focus on comparable corpora because for many less widely spoken languages and for technical domains where new terminology is constantly being introduced, parallel corpora are simply not available. Techniques that can exploit such corpora to deliver bilingual terminologies are of significant practical interest in these cases. The rest of the paper is structured as follows. In Section 2 we outline our method. In Section 3 we review related work on bilingual term extraction. Section 4 describes feature extraction for term pair classification. In Section 5 we present the data used in our evaluations and discuss our results. Section 6 concludes the paper. 2 Method The method we present below for bilingual term extraction is a symmetric approach, i.e. it assumes a method exists for monolingual term extraction in both source and target languages. We do not prescribe what a term must be. In particular we do not place any particular syntactic restrictions on what constitutes an allowable term, beyond the requirement that terms must be contiguous sequences of words in both source and target languages. Our method works by first pairing each term extracted from a source language document S with each term extracted from a target language document T aligned with S in the comparable corpus. We then treat term alignment as a binary classification task, i.e. 
we extract features for each source-target language potential term pair and decide whether to classify the pair as a term equivalent or not. For classification purposes we use an SVM binary classifier. The training data for the classifier is derived from EUROVOC (Steinberger et al., 2002), a term thesaurus covering the activities of the EU and the European Parliament. We have run our approach on the 21 official EU languages covered by EUROVOC, constructing 20 language pairs with English as the source 402 language. Considering all these languages allows us to directly compare our method’s performance on resource-rich (e.g. German, French, Spanish) and under-resourced languages (e.g. Latvian, Bulgarian, Estonian). We perform two different tests. First, we evaluate the performance of the classifier on a held-out term-pair list from EUROVOC using the standard measures of recall, precision and F-measure. We run this evaluation on all 20 language pairs. Secondly, we test the system’s performance on obtaining bilingual terms from comparable corpora. This second test simulates the situation of using the term alignment system in a real world scenario. For this evaluation we collected English-German comparable corpora from Wikipedia, performed monolingual term tagging and ran our tool over the term tagged corpora to extract bilingual terms. 3 Related Work Previous studies have investigated the extraction of bilingual terms from parallel and comparable corpora. For instance, Kupiec (1993) uses statistical techniques and extracts bilingual noun phrases from parallel corpora tagged with terms. Daille et al. (1994), Fan et al. (2009) and Okita et al. (2010) also apply statistical methods to extract terms/phrases from parallel corpora. In addition to statistical methods Daille et al. use word translation information between two words within the extracted terms as a further indicator of the correct alignment. More recently, Bouamor et al. (2012) use vector space models to align terms. The entries in the vectors are co-occurrence statistics between the terms computed over the entire corpus. Bilingual term alignment methods that work on comparable corpora use essentially three sorts of information: (1) cognate information, typically estimated using some sort of transliteration similarity measure (2) context congruence, a measure of the extent to which the words that the source term co-occurs with have the same sort of distribution and co-occur with words with the same sort distribution as do those words that co-occur with the candidate term and (3) translation of component words in the term and/or in context words, where some limited dictionary exists. For example, in Rapp (1995), Fung and McKeown (1997), Morin et. al. (2007), Cao and Li (2002) and Ismail and Manandhar (2010) the context of text units is used to identify term mappings. Transliteration and cognate-based information is exploited in AlOnaizan and Knight (2002), Knight and Graehl (1998), Udupa et. al. (2008) and Aswani and Gaizauskas (2010). Very few approaches have treated term alignment as a classification problem suitable for machine learning (ML) techniques. So far as we are aware, only Cao and Li (2002), who treat only base noun phrase (NP) mapping, consider the problem this way. 
However, it naturally lends itself to being viewed as a classification task, assuming a symmetric approach, since the different information sources mentioned above can be treated as features and each source-target language potential term pairing can be treated as an instance to be fed to a binary classifier which decides whether to align them or not. Our work differs from that of Cao and Li (2002) in several ways. First they consider only terms consisting of nounnoun pairs. Secondly for a given source language term ⟨N1, N2⟩, target language candidate terms are proposed by composing all translations (given by a bilingual dictionary) of N1 into the target language with all translations of N2. We remove both these restrictions. By considering all terms proposed by monolingual term extractors we consider terms that are syntactically much richer than nounnoun pairs. In addition, the term pairs we align are not constrained by an assumption that their component words must be translations of each other as found in a particular dictionary resource. 4 Feature extraction To align or map source and target terms we use an SVM binary classifier (Joachims, 2002) with a linear kernel and the trade-off between training error and margin parameter c = 10. Within the classifier we use language dependent and independent features described in the following sections. 4.1 Dictionary based features The dictionary based features are language dependent and are computed using bilingual dictionaries which are created with GIZA++ (Och and Ney, 2000; Och and Ney, 2003). The DGT-TM parallel data (Steinberger et al., 2012) was input to GIZA++ to obtain the dictionaries. Dictionary entries have the form ⟨s, ti, pi⟩, where s is a source word, ti is the i-th translation of s in the dictionary and pi is the probability that s is translated by ti, the pi’s summing to 1 for each s in the dictionary. From the dictionaries we removed all entries with pi < 0.05. In addition we also removed 403 every entry from the dictionary where the source word was less than four characters and the target word more than five characters in length and vice versa. This step is performed to try to eliminate translation pairs where a stop word is translated into a non-stop word. After performing these filtering steps we use the dictionaries to extract the following language dependent features: • isFirstWordTranslated is a binary feature indicating whether the first word in the source term is a translation of the first word in the target term. To address the issue of compounding, e.g. for languages like German where what is a multi-word term in English may be expressed as a single compound word, we check whether the compound source term has an initial prefix that matches the translation of the first target word, provided that translation is at least 5 character in length. • isLastWordTranslated is a binary feature indicating whether the last word in the source term is a translation of the last word in the target term. As with the previous feature in case of compound terms we check whether the source term ends with the translation of the target last word. • percentageOfTranslatedWords returns the percentage of words in the source term which have their translations in the target term. To address compound terms we check for each source word translation whether it appears anywhere within the target term. • percentageOfNotTranslatedWords returns the percentage of words of the source term which have no translations in the target term. 
• longestTranslatedUnitInPercentage returns the ratio of the number of words within the longest contiguous sequence of source words which has a translation in the target term to the length of the source term, expressed as a percentage. For compound terms we proceed as with percentageOfTranslatedWords. • longestNotTranslatedUnitInPercentage returns the percentage of the number of words within the longest sequence of source words which have no translations in the target term. These six features are direction-dependent and are computed in both directions, reversing which language is taken as the source and which as the target. We also compute another feature averagePercentageOfTranslatedWords which builds the average between the feature values of percentageOfTranslatedWords from source to target and target to source. Thus in total we have 13 dictionary based features. Note for non-compound terms if we compare two words for equality we do not perform string match but rather use the Levenshtein Distance (see Section 4.2) between the two words and treat them as equal if the Levenshtein Distance returns >= 0.95. This is performed to capture words with morphological differences. We set 0.95 experimentally. 4.2 Cognate based features Dictionaries mostly fail to return translation entries for named entities (NEs) or specialized terminology. Because of this we also use cognate based methods to perform the mapping between source and target words or vice versa. Aker et al. (2012) have applied (1) Longest Common Subsequence Ratio, (2) Longest Common Substring Ratio, (3) Dice Similarity, (4) Needleman-Wunsch Distance and (5) Levenshtein Distance in order to extract parallel phrases from comparable corpora. We adopt these measures within our classifier. Each of them returns a score between 0 and 1. • Longest Common Subsequence Ratio (LCSR): The longest common subsequence (LCS) measure measures the longest common non-consecutive sequence of characters between two strings. For instance, the words “dollars” and “dolari” share a sequence of 5 non-consecutive characters in the same ordering. We make use of dynamic programming (Cormen et al., 2001) to implement LCS, so that its computation is efficient and can be applied to a large number of possible term pairs quickly. We normalize relative to the length of the longest term: LCSR(X, Y ) = len[LCS(X, Y )] max[len(X), len(Y )] where LCS is the longest common subsequence between two strings and characters in this subsequence need not be contiguous. The shorthand len stands for length. • Longest Common Substring Ratio (LCSTR): The longest common substring (LCST) measure is similar to the LCS measure, but measures the longest common 404 consecutive string of characters that two strings have in common. I.e. given two terms we need to find the longest character n-gram the terms share. The formula we use for the LCSTR measure is a ratio as in the previous measure: LCSTR(X, Y ) = len[LCST(X, Y )] max[len(X), len(Y )] • Dice Similarity: dice = 2 ∗LCST len(X) + len(Y ) • Needlemann Wunsch Distance (NWD): NWD = LCST min[len(X) + len(Y )] • Levenshtein Distance (LD): This method computes the minimum number of operations necessary to transform one string into another. The allowable operations are insertion, deletion, and substitution. Compared to the previous methods, which all return scores between 0 and 1, this method returns a score s that lies between 0 and n. 
The number n represents the maximum number of operations to convert an arbitrarily dissimilar string to a given string. To have a uniform score across all cognate methods we normalize s so that it lies between 0 and 1, subtracting from 1 to convert it from a distance measure to a similarty measure: LDnormalized = 1 − LD max[len(X), len(Y )] 4.3 Cognate based features with term matching The cognate methods assume that the source and target language strings being compared are drawn from the same character set and fail to capture the corresponding terms if this is not the case. For instance, the cognate methods are not directly applicable to the English-Bulgarian and EnglishGreek language pairs, as both the Bulgarian and Greek alphabets, which are Cyrillic-based, differ from the English Latin-based alphabet. However, the use of distinct alphabets is not the only problem when comparing source and target terms. Although most EU languages use the Latin alphabet, the occurrence of special characters and diacritics, as well spelling and phonetic variations, are further challenges which are faced by term or entity mapping methods, especially in determining the variants of the same mention of the entity (Snae, 2007; Karimi et al., 2011).1 We address this problem by mapping a source term to the target language writing system or vice versa. For mapping we use simple character mappings between the writing systems, such as α →a, φ →ph, etc., from Greek to English. The rules allow one character on the lefthand side (source language) to map onto one or more characters on the righthand side (target language). We created our rules manually based on sound similarity between source and target language characters. We created mapping rules for 20 EU language pairs using primarily Wikipedia as a resource for describing phonetic mappings to English. After mapping a term from source to target language we apply the cognate metrics described in 4.2 to the resulting mapped term and the original term in the other language. Since we perform both target to source and source to target mapping, the number of cognate feature scores on the mapped terms is 10 – 5 due to source to target mapping and 5 due to target to source mapping. 4.4 Combined features We also combined dictionary and cognate based features. The combined features are as follows: • isFirstWordCovered is a binary feature indicating whether the first word in the source term has a translation (i.e. has a translation entry in the dictionary regardless of the score) or transliteration (i.e. if one of the cognate metric scores is above 0.72) in the target term. The threshold 0.7 for transliteration similarity is set experimentally using the training data. To do this we iteratively ran feature extraction, trained the classifier and recorded precision on the training data using a threshold value chosen from the interval [0, 1] in steps of 0.1. We selected as final threshold value, the lowest value for which the precision score was the same as when the threshold value was set to 1. • isLastWordCovered is similar to the previous feature one but indicates whether the last word in the source term has a translation or 1Assuming the terms are correctly spelled, otherwise the misspelling is another problem. 2Note that we use the cognate scores obtained on the character mapped terms. 405 transliteration in the target term. If this is the case, 1 is returned otherwise 0. 
• percentageOfCoverage returns the percentage of source term words which have a translation or transliteration in the target term. • percentageOfNonCoverage returns the percentage of source term words which have neither a translation nor transliteration in the target term. • difBetweenCoverageAndNonCoverage returns the difference between the last two features. Like the dictionary based features, these five features are direction-dependent and are computed in both directions – source to target and target to source, resulting in 10 combined features. In total we have 38 features – 13 features based on dictionary translation as described in Section 4.1, 5 cognate related features as outlined in Section 4.2, 10 cognate related features derived from character mappings over terms as described in Section 4.3 and 10 combined features. 5 Experiments 5.1 Data Sources In our experiments we use two different data resources: EUROVOC terms and comparable corpora collected from Wikipedia. 5.1.1 EUROVOC terms EUROVOC is a term thesaurus covering the activities of the EU and the European Parliament in particular. It contains 6797 term entries in 24 different languages including 22 EU languages and Croatian and Serbian (Steinberger et al., 2002). 5.1.2 Comparable Corpora We also built comparable corpora in the information technology (IT) and automotive domains by gathering documents from Wikipedia for the English-German language pair. First, we manually chose one seed document in English as a starting point for crawling in each domain3. We then identified all articles to which the seed document is linked and added them to the crawling queue. This process is performed recursively for each document in the queue. Since our aim is to build a comparable corpus, we only added English 3http://en.wikipedia.org/wiki/Information technology for IT and http://en.wikipedia.org/wiki/Automotive industry for automotive domain. documents which have an inter-language link in Wikipedia to a German document. We set a maximum depth of 3 in the recursion to limit size of the crawling set, i.e. documents are crawled only if they are within 3 clicks of the seed documents. A score is then calculated to represent the importance of each document di in this domain: scoredi = n X j=1 freqdij depthdj where n is the total number of documents in the queue, freqdij is 1 if di is linked to dj, or 0 otherwise, and depthdj is the number of clicks between dj and the seed document. After all documents in the queue were assigned a score, we gathered the top 1000 documents and used inter-language link information to extract the corresponding article in the target language. We pre-processed each Wikipedia article by performing monolingual term tagging using TWSC (Pinnis et al., 2012). TWSC is a term extraction tool which identifies terms ranging from one to four tokens in length. First, it POS-tags each document. For German POS-tagging we use TreeTagger (Schmid, 1995). Next, it uses term grammar rules, in the form of sequences of POS tags or non-stop words, to identify candidate terms. Finally, it filters the candidate terms using various statistical measures, such as pointwise mutual information and TF*IDF. 5.2 Performance test of the classifier To test the classifier’s performance we evaluated it against a list of positive and negative examples of bilingual term pairs using the measures of precision, recall and F-measure. 
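Before the evaluation details, the cognate measures of Section 4.2 can be made concrete with a short sketch. The code below is our own illustrative reading of the formulas in the text, not the authors' implementation; in particular we read the NWD denominator as min(len(X), len(Y)), and all function names are ours.

# Illustrative implementations of the cognate similarity measures of Section 4.2.
def lcs_len(x, y):
    # Length of the longest common (non-contiguous) subsequence, via dynamic programming.
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, cx in enumerate(x, 1):
        for j, cy in enumerate(y, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if cx == cy else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(x)][len(y)]

def lcst_len(x, y):
    # Length of the longest common contiguous substring.
    best, dp = 0, [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, cx in enumerate(x, 1):
        for j, cy in enumerate(y, 1):
            if cx == cy:
                dp[i][j] = dp[i - 1][j - 1] + 1
                best = max(best, dp[i][j])
    return best

def levenshtein(x, y):
    # Minimum number of insertions, deletions and substitutions.
    prev = list(range(len(y) + 1))
    for i, cx in enumerate(x, 1):
        cur = [i]
        for j, cy in enumerate(y, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cx != cy)))
        prev = cur
    return prev[len(y)]

def lcsr(x, y):    return lcs_len(x, y) / max(len(x), len(y))
def lcstr(x, y):   return lcst_len(x, y) / max(len(x), len(y))
def dice(x, y):    return 2 * lcst_len(x, y) / (len(x) + len(y))
def nwd(x, y):     return lcst_len(x, y) / min(len(x), len(y))
def ld_norm(x, y): return 1 - levenshtein(x, y) / max(len(x), len(y))

print(round(lcsr("dollars", "dolari"), 2))   # the "dollars"/"dolari" example from the text: 5/7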
We used 21 EU official languages, including English, and paired each non-English language with English, leading to 20 language pairs.4 In the evaluation we used 600 positive term pairs taken randomly from the EUROVOC term list. We also created around 1.3M negative term pairs by pairing a source term with 200 randomly chosen distinct target terms. We select such a large number to simulate the real application scenario where the classifier will be confronted with a huge number of negative cases 4Note that we do not use the Maltese-English language pair, as for this pair we found that 5861 out of 6797 term pairs were identical, i.e. the English and the Maltese terms were the same. Excluding Maltese, the average number of identical terms between a non-English language and English in the EUROVOC data is 37.7 (out of a possible 6797). 406 Table 1: Wikipedia term pairs processed and judged as positive by the classifier. Processed Positive DE IT 11597K 3249 DE Automotive 12307K 1772 and a relatively small number of positive pairs. The 600 positive examples contain 200 single term pairs (i.e. single word on both sides), 200 term pairs with a single word on only one side (either source or target) and 200 term pairs with more than one word on each side. For training we took the remaining 6200 positive term pairs from EUROVOC and constructed another 6200 term pairs as negative examples, leading to total of 12400 term pairs. To construct the 6200 negative examples we used the 6200 terms on the source side and paired each source term with an incorrect target term. Note that we ensure that in both training and testing the set of negative and positive examples do not overlap. Furthermore, we performed data selection for each language pair separately. This means that the same pairs found in, e.g., English-German are not necessarily the same as in English-Italian. The reason for this is that the translation lengths, in number of words, vary between language pairs. For instance adult education is translated into Erwachsenenbildung in German and contains just a single word (although compound). The same term is translated into istruzione degli adulti in Italian and contains three words. For this reason we carry out the data preparation process separately for each language pair in order to obtain the three term pair sets consisting of term pairs with only a single word on each side, term pairs with a single word on just one side and term pairs with multiple words on both sides. 5.3 Manual evaluation For this evaluation we used the Wikipedia comparable corpora collected for the English-German (EN-DE) language pair. For each pair of Wikipedia articles we used the terms tagged by TWSC and aligned each source term with every target term. This means if both source and target articles contain 100 terms then this leads to 10K term pairs. We extracted features for each pair of terms and ran the classifier to decide whether the pair is positive or negative. Table 1 shows the number of term pairs processed and the count of pairs classified as positive. Table 2 shows five positive term pairs extracted from the EnglishGerman comparable corpora for each of the IT and automotive domains. We manually assessed a subset of the positive examples. We asked human assessors to categorize each term pair into one of the following categories: 1. Equivalence: The terms are exact translations/transliterations of each other. 2. 
Inclusion: Not an exact translation/transliteration, but an exact translation/transliteration of one term is entirely contained within the term in the other language, e.g: “F1 car racing” vs “Autorennen (car racing)”. 3. Overlap: Not category 1 or 2, but the terms share at least one translated/transliterated word, e.g: “hybrid electric vehicles” vs “hybride bauteile (hybrid components)”. 4. Unrelated: No word in either term is a translation/transliteration of a word in the other. In the evaluation we randomly selected 300 pairs for each domain and showed them to two German native speakers who were fluent in English. We asked the assessors to place each of the term pair into one of the categories 1 to 4. 5.4 Results and Discussion 5.4.1 Performance test of the classifier The results of the classifier evaluation are shown in Table 3. The results show that the overall performance of the classifier is very good. In many cases the precision scores reach 100%. The lowest precision score is obtained for Lithuanian (LT) with 67%. For this language we performed an error analysis. In total there are 221 negative examples classified as positive. All these terms are multi-term, i.e. each term pair contains at least two words on each side. For the majority of the misclassified terms – 209 in total – 50% or more of the words on one side are either translations or cognates of words on the other side. Of these, 187 contained 50% or more translation due to cognate words – examples of such cases are capital increase – kapitalo eksportas or Arab organisation – Arabu lyga with the cognates capital – kapitalo and Arab – Arabu respectively. For the remainder, 50% or more of the words on one side are dictionary translations of words on the other side. In order to understand the reason why the classifier treats such cases as positive we examined the 407 Table 2: Example positive pairs for English-German. IT Automotive chromatographic technique — chromatographie methode distribution infrastructure — versorgungsinfrastruktur electrolytic capacitor — elektrolytkondensatoren ambient temperature — außenlufttemperatur natural user interfaces — nat¨urliche benutzerschnittstellen higher cetane number — erh¨ohter cetanzahl anode voltage — anodenspannung fuel tank — kraftstoffpumpe digital subscriber loop — digitaler teilnehmeranschluss hydrogen powered vehicle — wasserstoff fahrzeug Table 3: Classifier performance results on EUROVOC data (P stands for precision, R for recall and F for F-measure). Each language is paired with English. The test set contains 600 positive and 1359400 negative examples. ET HU NL DA SV DE LV FI PT SL FR IT LT SK CS RO PL ES EL BG P 1 1 .98 1 1 .98 1 1 .7 1 1 1 .67 .81 1 1 1 1 1 1 R .67 .72 .82 .69 .81 .77 .78 .65 .82 .66 .66 .7 .77 .84 .72 .78 .69 .8 .78 .79 F .80 .83 .89 .81 .89 .86 .87 .78 .75 .79 .79 .82 .71 .91 .83 .87 .81 .88 .87 .88 training data and found 467 positive pairs which had the same characteristics as the negative examples in the testing set classified. We removed these 467 entries from the training set and re-trained the classifier. The results with the new classifier are 99% precision, 68% recall and 80% F score. In addition to Lithuanian, two further languages, Portuguese (PT) and Slovak (SK), also had substantially lower precision scores. For these languages we also removed positive entries falling into the same problem categories as the LT ones and trained new classifiers with the filtered training data. 
The precision results increased substantially for both PT and SK – 95% precision, 76% recall, 84% F score for PT and 94% precision, 72% recall, 81% F score for SK. The recall scores are lower than the precision scores, ranging from 65% to 84%. We have investigated the recall problem for FI, which has the lowest recall score at 65%. We observed that all the missing term pairs were not cognates. Thus, the only way these terms could be recognized as positive is if they are found in the GIZA++ dictionaries. However, due to data sparsity in these dictionaries this did not happen in these cases. For these term pairs either the source or target terms were not found in the dictionaries. For instance, for the term pair offshoring — uudelleensijoittautuminen the GIZA++ dictionary contains the entry offshoring but according to the dictionary it is not translated into uudelleensijoittautuminen, which is the matching term in EUROVOC. 5.4.2 Manual evaluation The results of the manual evaluation are shown in Table 4. From the results we can see that both assessors judge above 80% of the IT domain terms as category 1 – the category containing equivalent Table 4: Results of the EN-DE manual evaluation by two annotators. Numbers reported per category are percentages. Domain Ann. 1 2 3 4 IT P1 81 6 6 7 P2 83 7 7 3 Automotive P1 66 12 16 6 P2 60 15 16 9 term pairs. Only a small proportion of the term pairs are judged as belonging to category 4 (3–7%) – the category containing unrelated term pairs. For the automotive domain the proportion of equivalent term pairs varies between 60 and 66%. For unrelated term pairs this is below 10% for both assessors. We investigated the inter-annotator agreement. Across the four classes the percentage agreement was 83% for the automotive domain term pairs and 86% for the IT domain term pairs. The kappa statistic, κ, was .69 for the automotive domain pairs and .52 for the IT domain. We also considered two class agreement where we treated term pairs within categories 2 and 3 as belonging to category 4 (i.e. as “incorrect” translations). In this case, for the automotive domain the percentage agreement was 90% and κ = 0.72 and for the IT domain percentage agreement was 89% with κ = 0.55. The agreement in the automotive domain is higher than in the IT one although both judges were computer scientists. We analyzed the differences and found that they differ in cases where the German and the English term are both in English. One of the annotators treated such cases as correct translation, whereas the other did not. We also checked to ensure our technique was not simply rediscovering our dictionaries. Since the GIZA++ dictionaries contain only single word–single word mappings, we examined the 408 newly aligned term pairs that consisted of one word on both source and target sides. Taking both the IT and automotive domains together, our algorithm proposed 5021 term pairs of which 2751 (55%) were word-word term pairs. 462 of these (i.e. 17% of the word-word term pairs or 9% of the overall set of aligned term pairs) were already in either the EN-DE or DE-EN GIZA++ dictionaries. Thus, of our newly extracted term pairs a relatively small proportion are rediscovered dictionary entries. We also checked our evaluation data to see what proportion of the assessed term pairs were already to be found in the GIZA++ dictionaries. A total of 600 term pairs were put in front of the judges of which 198 (33%) were word-word term pairs. 
Of these 15 (less than 8% of the word-word pairs and less then 3% of the overall assessed set of assessed term pairs) were word-word pairs already in the dictionaries. We conclude that our evaluation results are not unduly affected by assessing term pairs which were given to the algorithm. Error analysis For both domains we performed an error analysis for the unrelated, i.e. category 4 term pairs. We found that in both domains the main source of errors is due to terms with different meanings but similar spellings such as the following example (1). (1) accelerator — decelerator For this example the cognate methods, e.g. the Levenshtein similarity measure, returns a score of 0.81. This problem could be addressed in different ways. First, it could be resolved by applying a very high threshold for the cognate methods. Any cognate score below that threshold could be regarded as zero – as we did for the combined features (cf. Section 4.4). However, setting a similarity threshold higher than 0.9 – to filter out cases as in (1) – will cause real cognates with greater variation in the spellings to be missed. This will, in particular, affect languages with a lot of inflection, such as Latvian. Another approach to address this problem would be to take the contextual or distributional properties of the terms into consideration. To achieve this, training data consisting of term pairs along with contextual information is required. However, such training data does not currently exist (i.e. resources like EUROVOC do not contain contextual information) and it would need to be collected as a first step towards applying this approach to the problem. Partial Translation The assessors assigned 6 – 7% of the term pairs in the IT domain and 12 – 16% in the automotive domain to categories 2 and 3. In both categories the term pairs share translations or cognates. Clearly, if humans such as professional translators are the end users of these terms, then it could be helpful for them to find some translation units within the terms. In category 2 this will be the entire translation of one term in the other such as the following examples.5 (2) visible graphical interface — grafische benutzerschnittstelle (3) modern turbocharger systems — moderne turbolader In example (3) the a translation of the German term is to be found entirely within in the English term but the English term has the additional word visible, a translation of which is not found in the German term. In example (4), again the translation of the German term is entirely found in the English term, but as in the previous example, one of the English words – systems – in this case, has no match within the German term. In category 3 there are only single word translation overlaps between the terms as shown in the following examples. (4) national standard language — niederl¨andischen standardsprache (5) thermoplastic material — thermoplastische elastomere In example (5) standard language is translated to standardsprache and in example (6) thermoplastic to thermoplastische. The other words within the terms are not translations of each other. Another application of the extracted term pairs is to use them to enhance existing parallel corpora to train SMT systems. In this case, including the partially correct terms may introduce noise. This is especially the case for the terms within category 3. However, the usefulness of terms in both these scenarios requires further investigation, which we aim to do in future work. 
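The accelerator — decelerator case from the error analysis above can be checked by hand. The snippet below is only a worked illustration of the normalized Levenshtein score from Section 4.2: because the two words have equal length and differ only in their first two characters, the positional mismatch count equals the edit distance, so no full edit-distance routine is needed here.

# Worked check of the false-cognate score discussed in the error analysis.
a, b = "accelerator", "decelerator"
edits = sum(ca != cb for ca, cb in zip(a, b))        # 2 substitutions, no insertions or deletions
similarity = 1 - edits / max(len(a), len(b))         # normalized Levenshtein as in Section 4.2
print(edits, round(similarity, 2))                   # 2, ~0.82 (the paper reports 0.81)

The pair scores well above a 0.7 transliteration threshold even though the words are unrelated in meaning, which is why raising the threshold alone cannot fix this class of error without also discarding genuine but more variable cognates.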
5In our data it is always the case that the target term is entirely translated within the English one and the other way round. 409 6 Conclusion In this paper we presented an approach to align terms identified by a monolingual term extractor in bilingual comparable corpora using a binary classifier. We trained the classifier using data from the EUROVOC thesaurus. Each candidate term pair was pre-processed to extract various features which are cognate-based or dictionary-based. We measured the performance of our classifier using Information Retrieval (IR) metrics and a manual evaluation. In the IR evaluation we tested the performance of the classifier on a held out test set taken from EUROVOC. We used 20 EU language pairs with English being always the source language. The performance of our classifier in this evaluation reached the 100% precision level for many language pairs. In the manual evaluation we had our algorithm extract pairs of terms from Wikipedia articles – articles forming comparable corpora in the IT and automotive domains – and asked native speakers to categorize a selection of the term pairs into categories reflecting the level of translation of the terms. In the manual evaluation we used the English-German language pair and showed that over 80% of the extracted term pairs were exact translations in the IT domain and over 60% in the automotive domain. For both domains over 90% of the extracted term pairs were either exact or partial translations. We also performed an error analysis and highlighted problem cases, which we plan to address in future work. Exploring ways to add contextual or distributional features to our term representations is also an avenue for future work, though it clearly significantly complicates the approach, one of whose advantages is its simplicitiy. Furthermore, we aim to extend the existing dictionaries and possibly our training data with terms extracted from comparable corpora. Finally, we plan to investigate the usefulness of the terms in different application scenarios, including computer assisted translation and machine translation. Acknowledgements The research reported was funded by the TaaS project, European Union Seventh Framework Programme, grant agreement no. 296312. The authors would like to thank the manual annotators for their helpful contributions. We would also like to thank partners at Tilde SIA and at the University of Zagreb for supplying the TWSC term extraction tool, developed within the EU funded project ACCURAT. References A. Aker, Y. Feng, and R. Gaizauskas. 2012. Automatic bilingual phrase extraction from comparable corpora. In 24th International Conference on Computational Linguistics (COLING 2012), IIT Bombay, Mumbai, India, 2012. Association for Computational Linguistics. Y. Al-Onaizan and K. Knight. 2002. Machine transliteration of names in arabic text. In Proceedings of the ACL-02 workshop on Computational approaches to semitic languages, pages 1–13. Association for Computational Linguistics. N. Aswani and R. Gaizauskas. 2010. English-hindi transliteration using multiple similarity metrics. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010), Valetta, Malta. D. Bouamor, N. Semmar, and P. Zweigenbaum. 2012. Identifying bilingual multi-word expressions for statistical machine translation. In LREC 2012, Eigth International Conference on Language Resources and Evaluation, pages 674-679, Istanbul, Turkey, 2012. ELRA. Y. Cao and H. Li. 2002. 
Base noun phrase translation using web data and the em algorithm. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1–7. Association for Computational Linguistics. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. 2001. Introduction to Algorithms. The MIT Press, 2nd revised edition, September. B. Daille, ´E. Gaussier, and J.M. Lang´e. 1994. Towards automatic extraction of monolingual and bilingual terminology. In Proceedings of the 15th conference on Computational linguistics-Volume 1, pages 515– 521. Association for Computational Linguistics. X. Fan, N. Shimizu, and H. Nakagawa. 2009. Automatic extraction of bilingual terms from a chinesejapanese parallel corpus. In Proceedings of the 3rd International Universal Communication Symposium, pages 41–45. ACM. P. Fung and K. McKeown. 1997. Finding terminology translations from non-parallel corpora. In Proceedings of the 5th Annual Workshop on Very Large Corpora, pages 192–202. A. Ismail and S. Manandhar. 2010. Bilingual lexicon extraction from comparable corpora using indomain terms. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 481–489. Association for Computational Linguistics. 410 T. Joachims. 2002. Learning to classify text using support vector machines: Methods, theory and algorithms, volume 186. Kluwer Academic Publishers Norwell, MA, USA:. S. Karimi, F. Scholer, and A. Turpin. 2011. Machine transliteration survey. ACM Computing Surveys (CSUR), 43(3):17. K. Knight and J. Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4):599–612. J. Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of the 31st annual meeting on Association for Computational Linguistics, pages 17–22. Association for Computational Linguistics. R. Moore. 2003. Learning translations of namedentity phrases from parallel corpora. In In Proceedings of the tenth conference on European chapter of the Association for Computational LinguisticsVolume 1, pages 259266. Association for Computational Linguistics. E. Morin, B. Daille, K. Takeuchi, and K. Kageura. 2007. Bilingual terminology mining - using brain, not brawn comparable corpora. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 664–671, Prague, Czech Republic, June. Association for Computational Linguistics. F. J. Och and H. Ney. 2000. A comparison of alignment models for statistical machine translation. In Proceedings of the 18th conference on Computational linguistics, pages 1086–1090, Morristown, NJ, USA. Association for Computational Linguistics. F. J. Och Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. T. Okita, A. Maldonado Guerra, Y. Graham, and A. Way. 2010. Multi-word expression-sensitive word alignment. Association for Computational Linguistics. M¯arcis Pinnis, Nikola Ljubeˇsi´c, Dan S¸tef˘anescu, Inguna Skadin¸a, Marko Tadi´c, and Tatiana Gornostay. 2012. Term extraction, tagging, and mapping tools for under-resourced languages. In Proc. of the 10th Conference on Terminology and Knowledge Engineering (TKE 2012), June, pages 20–21. R. Rapp. 1995. Identifying word translations in nonparallel texts. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 320–322. Association for Computational Linguistics. Helmut Schmid. 1995. 
Treetagger— a language independent part-of-speech tagger. Institut f¨ur Maschinelle Sprachverarbeitung, Universit¨at Stuttgart, page 43. C. Snae. 2007. A comparison and analysis of name matching algorithms. International Journal of Applied Science. Engineering and Technology, 4(1):252–257. R. Steinberger, B. Pouliquen, and J. Hagman. 2002. Cross-lingual document similarity calculation using the multilingual thesaurus eurovoc. Computational Linguistics and Intelligent Text Processing, pages 101–121. R. Steinberger, A. Eisele, S. Klocek, S. Pilos, and P. Schlter. 2012. Dgt-tm: A freely available translation memory in 22 languages. In Proceedings of LREC, pages 454–459. R. Udupa, K. Saravanan, A. Kumaran, and J. Jagarlamudi. 2008. Mining named entity transliteration equivalents from comparable corpora. In Proceeding of the 17th ACM conference on Information and knowledge management, pages 1423–1424. ACM. 411
2013
40
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 412–422, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis Kashyap Popat2 Balamurali A R1,2,3 Pushpak Bhattacharyya2 Gholamreza Haffari3 1IITB-Monash Research Academy, IIT Bombay 3Monash University 2Dept. of Computer Science and Engineering, IIT Bombay Australia {kashyap,balamurali,pb}@cse.iitb.ac.in [email protected] Abstract Expensive feature engineering based on WordNet senses has been shown to be useful for document level sentiment classification. A plausible reason for such a performance improvement is the reduction in data sparsity. However, such a reduction could be achieved with a lesser effort through the means of syntagma based word clustering. In this paper, the problem of data sparsity in sentiment analysis, both monolingual and cross-lingual, is addressed through the means of clustering. Experiments show that cluster based data sparsity reduction leads to performance better than sense based classification for sentiment analysis at document level. Similar idea is applied to Cross Lingual Sentiment Analysis (CLSA), and it is shown that reduction in data sparsity (after translation or bilingual-mapping) produces accuracy higher than Machine Translation based CLSA and sense based CLSA. 1 Introduction Data sparsity is the bane of Natural Language Processing (NLP) (Xue et al., 2005; Minkov et al., 2007). Language units encountered in the test data but absent in the training data severely degrade the performance of an NLP task. NLP applications innovatively handle data sparsity through various means. A special, but very common kind of data sparsity viz., word sparsity, can be addressed in one of the two obvious ways: 1) sparsity reduction through paradigmatically related words or 2) sparsity reduction through syntagmatically related words. Paradigmatic analysis of text is the analysis of concepts embedded in the text (Cruse, 1986; Chandler, 2012). WordNet is a byproduct of such an analysis. In WordNet, paradigms are manually generated based on the principles of lexical and semantic relationship among words (Fellbaum, 1998). WordNets are primarily used to address the problem of word sense disambiguation. However, at present there are many NLP applications which use WordNet. One such application is Sentiment Analysis (SA) (Pang and Lee, 2002). Recent research has shown that word sense based semantic features can improve the performance of SA systems (Rentoumi et al., 2009; Tamara et al., 2010; Balamurali et al., 2011) compared to word based features. Syntagmatic analysis of text concentrates on the surface properties of the text. Compared to paradigmatic property extraction, syntagmatic processing is relatively light weight. One of the obvious syntagmas is words, and words are grouped into equivalence classes or clusters, thus reducing the model parameters of a statistical NLP system (Brown et al., 1992). When used as an additional feature with word based language models, it has been shown to improve the system performance viz., machine translation (Uszkoreit and Brants, 2008; Stymne, 2012), speech recognition (Martin et al., 1995; Samuelsson and Reichl, 1999), dependency parsing (Koo et al., 2008; Haffari et al., 2011; Zhang and Nivre, 2011; Tratz and Hovy, 2011) and NER (Miller et al., 2004; Faruqui and Pad´o, 2010; Turian et al., 2010; T¨ackstr¨om et al., 2012). 
In this paper, the focus is on alleviating the data sparsity faced by supervised approaches for SA through the means of cluster based features. As WordNets are essentially word 412 clusters wherein words with the same meaning are clubbed together, they address the problem of data sparsity at word level. The abstraction and dimensionality reduction thus achieved attributes to the superior performance for SA systems that employs WordNet senses as features. However, WordNets are manually created. Automatic creation of the same is challenging and not much successful because of the linguistic complexity involved. In case of SA, manually creating the features based on WordNet senses is a tedious and an expensive process. Moreover, WordNets are not present for many languages. All these factors make the paradigmatic property based cluster features like WordNet senses a less promising pursuit for SA. The syntagmatic analysis essentially makes use of distributional similarity and may in many circumstances subsume the paradigmatic analysis. In the current work, this particular insight is used to solve the data sparsity problem in the sentiment analysis by leveraging unlabelled monolingual corpora. Specifically, experiments are performed to investigate whether features developed from manually crafted clusterings (coming from WordNet) can be replaced by those generated from clustering based on syntagmatic properties. Further, cluster based features are used to address the problem of scarcity of sentiment annotated data in a language. Popular approaches for Cross-Lingual Sentiment Analysis (CLSA) (Wan, 2009; Duh et al., 2011) depend on Machine Translation (MT) for converting the labeled data from one language to the other (Hiroshi et al., 2004; Banea et al., 2008; Wan, 2009). However, many languages which are truly resource scarce, do not have an MT system or existing MT systems are not ripe to be used for CLSA (Balamurali et al., 2013). To perform CLSA, this study leverages unlabelled parallel corpus to generate the word alignments. These word alignments are then used to link cluster based features to obliterate the language gap for performing SA. No MT systems or bilingual dictionaries are used for this study. Instead, language gap for performing CLSA is bridged using linked cluster or cross-lingual clusters (explained in section 4) with the help of unlabelled monolingual corpora. The contributions of this paper are two fold: 1. Features created from manually built and finer clusters can be replaced by inexpensive cluster based features generated solely from unlabelled corpora. Experiments performed on four publicly available datasets in three languages viz., English, Hindi and Marathi1 suggest that cluster based features can considerably boost the performance of an SA system. Moreover, state of the art result is obtained for one of the publicly available dataset. 2. An alternative and effective approach for CLSA is demonstrated using clusters as features. Word clustering is a powerful mechanism to “transfer” a sentiment classifier from one language to another. Thus can be used in truly resource scarce scenarios like that of English-Marathi CLSA. The rest of the paper is organized as follows: section 2 presents related work. Section 3 explains different word cluster based features employed to reduce data sparsity for monolingual SA. In section 4, alternative CLSA approaches based on word clustering are elucidated. Experimental details are explained in section 5. 
Results and discussions are presented in section 6 and section 7 respectively. Finally, section 8 concludes the paper pointing to some future research possibilities. 2 Related Work The problem of SA at document level is defined as the classification of document into different polarity classes (positive and negative) (Turney, 2002). Both supervised (Benamara et al., 2007; Martineau and Finin, 2009) and unsupervised approaches (Mei et al., 2007; Lin and He, 2009) exist for this task. Supervised approaches are popular because of their superior classification accuracy (Mullen and Collier, 2004; Pang and Lee, 2008). Feature engineering plays an important role in these systems. Apart from the commonly used bag-of-words features based on unigrams/bigrams/ngrams (Dave et al., 2003; Ng et al., 2006; Martineau and Finin, 2009), 1Hindi and Marathi belong to the Indo-Aryan subgroup of the Indo-European language family and are two widely spoken Indian languages with a speaker population of 450 million and 72 million respectively. 413 syntax (Matsumoto et al., 2005; Nakagawa et al., 2010), semantic (Balamurali et al., 2011) and negation (Ikeda et al., 2008) have also been explored for this task. There has been research related to clustering and sentiment analysis. In Rooney et al. (2011), documents are clustered based on the context of each document and sentiment labels are attached at the cluster level. Zhai et al. (2011) attempts to cluster features of a product to perform sentiment analysis on product reviews. In this work, word clusters (syntagmatic and paradigmatic) encoding a mixture of syntactic and semantic information are used for feature engineering. In situations where labeled data is not present in a language, approaches based on cross-lingual sentiment analysis are used. Most often these methods depend on an intermediary machine translation system (Wan, 2009; Brooke et al., 2009) or a bilingual dictionary (Ghorbel and Jacot, 2011; Lu et al., 2011) to bridge the language gap. Given the subtle and different ways the sentiment can be expressed which itself manifested as a result of cultural diversity amongst different languages, an MT system has to be of a superior quality to capture them. 3 Clustering for Sentiment Analysis The goal of this paper, to remind the reader, is to investigate whether superior word cluster features based on manually crafted and fine grained lexical resource like WordNet can be replaced with the syntagmatic property based word clusters created from unlabelled monolingual corpora. In this section, different clustering approaches are presented for feature engineering in a monolingual setting. 3.1 Approach 1: Clustering based on WordNet Sense A synonymous set of words in a WordNet is called a synset. Each synset can be considered as a word cluster comprising of semantically similar words. Balamurali et al. (2011) showed that WordNet synsets can act as good features for document level sentiment classification. Motivation for their study stems from the fact that different senses of a word can have different polarities. To empirically prove the superiority of sense based features, different variants of a travel review domain corpus were generated by using automatic/manual sense disambiguation techniques. Thereafter, accuracies of classifiers based on different sense-based and word-based features were compared. The results suggested that WordNet synset based features performed better than word-based features. 
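For orientation, the synset-based feature representation discussed above can be sketched in a few lines. The snippet below uses NLTK's English WordNet interface and a crude first-sense heuristic purely for illustration; the studies discussed here instead rely on manual or automatic word sense disambiguation, so this is only a sketch of the feature shape, not the approach itself.

# Sketch: represent a document as a bag of WordNet synset identifiers.
from collections import Counter
from nltk.corpus import wordnet as wn   # requires a prior nltk.download('wordnet')

def synset_features(tokens):
    feats = Counter()
    for tok in tokens:
        senses = wn.synsets(tok)
        if senses:                       # first listed sense as a stand-in for proper WSD
            feats[senses[0].name()] += 1
        else:
            feats[tok.lower()] += 1      # back off to the surface word
    return feats

print(synset_features("the movie was a stunning visual treat".split()))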
In this study, synset identifiers are extracted from manually/automatically sense-annotated corpora and used as features for creating sentiment classifiers. The classifier thus built is used as a baseline. Apart from this, another baseline employing word-based features is used for a comprehensive comparison.

3.2 Approach 2: Syntagmatic Property based Clustering

For this particular study, a co-occurrence based algorithm is used to create word clusters. As the algorithm is based on co-occurrence, one can extract classes that have the flavour of syntagmatic grouping, depending on the nature of the underlying statistics. The agglomerative clustering algorithm of Brown et al. (1992) is used for this purpose. It is a hard clustering algorithm, i.e., each word belongs to exactly one cluster. Formally, as mentioned in Brown et al. (1992), let C be a hard clustering function which maps the vocabulary V to one of K clusters. Then the likelihood L(·) of a sequence of word tokens w = [w_j]_{j=1}^{m}, with w_j ∈ V, can be factored as

L(\mathbf{w}; C) = \prod_{j=1}^{m} p(w_j \mid C(w_j)) \, p(C(w_j) \mid C(w_{j-1}))    (1)

Words are assigned to clusters such that this quantity is maximized. For the purpose of sentiment classification, the cluster identifiers representing the words in a document are used as features for training.

4 Clustering for Cross Lingual Sentiment Analysis

Existing approaches for CLSA depend on an intermediary machine translation system to bridge the language gap (Hiroshi et al., 2004; Banea et al., 2008). Machine translation is very resource intensive. If a language is truly resource scarce, it is unlikely to have an MT system. Given that sentiment analysis is a less resource intensive task than machine translation, the use of an MT system is hard to justify for performing CLSA. As a viable alternative, cluster linkages can be learned from a bilingual parallel corpus, and these linkages can be used to bridge the language gap for CLSA. In this section, three approaches that use clusters as features for CLSA are compared. The language whose annotated data is used for training is called the source language (S), while the language whose documents are to be sentiment classified is referred to as the target language (T).

4.1 Approach 1: Projection based on Sense (PS)

In this approach, a Multidict is used to bridge the language gap for SA. A Multidict is an instance of WordNet in which the same sense from different languages is linked (Mohanty et al., 2008). An entry in the Multidict has a WordNet sense identifier from S and the corresponding WordNet sense identifier from T. The approach of projection based on sense is explained in Algorithm 1. Note that after the sense-marking operation, each document is represented as a vector of WordNet sense identifiers.

Algorithm 1 Projection based on sense
Input: Polarity labeled data in source language (S) and data in target language (T) to be labeled
Output: Classified documents
1: Sense mark the polarity labeled data from S
2: Project the sense marked corpora from S to T using a Multidict
3: Model the sentiment classifier using the data obtained in step 2
4: Sense mark the unlabelled data from T
5: Test the sentiment classifier on the data obtained in step 4 using the model obtained in step 3

Sense identifiers are the features for the classifier. For those sense identifiers which do not have a corresponding entry in the Multidict, no projection is performed.
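A minimal sketch of Algorithm 1 follows, assuming sense-marked documents are simply lists of sense identifiers and the Multidict is available as a source-to-target mapping; the scikit-learn classifier is a stand-in for whichever model is used downstream, not the authors' exact implementation.

```python
# Sketch of projection based on sense (Algorithm 1). Documents are lists of
# sense identifiers; multidict maps source-language sense ids to target-language
# ids. Identifiers without a Multidict entry are dropped (no projection).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def project(sense_doc, multidict):
    """Map source-language sense ids to target-language ids; drop unlinked ids."""
    return [multidict[sid] for sid in sense_doc if sid in multidict]

def ps_classify(src_docs, src_labels, tgt_docs, multidict):
    projected = [project(d, multidict) for d in src_docs]            # steps 1-2
    clf = make_pipeline(CountVectorizer(analyzer=lambda doc: doc),   # docs are id lists
                        LinearSVC())
    clf.fit(projected, src_labels)                                   # step 3
    return clf.predict(tgt_docs)    # steps 4-5: tgt_docs are sense-marked id lists
```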
4.2 Approach 2: Direct Cluster Linking (DCL)

Given a parallel bilingual corpus, word clusters in S can be aligned to clusters in T. Word alignments are created using the parallel corpora. Given two aligned word sequences w^S = [w_j^S]_{j=1}^{m} and w^T = [w_k^T]_{k=1}^{n}, let α_{T|S} be a set of scored alignments from the source language to the target language. Here, an alignment from the a_k-th source word to the k-th target word, with score s_{k,a_k} > ε, is represented as (w_k^T, w_{a_k}^S, s_{k,a_k}) ∈ α_{T|S}. To simplify, k ∈ α_{T|S} is used to denote those target words w_k^T that are aligned to some source word w_{a_k}^S. The source-side and target-side clusters are linked using Equation (2):

LC(l) = \arg\max_{t} \sum_{\substack{k \in \alpha_{T|S} \cup \alpha_{S|T} \\ \text{s.t. } C_T(w_k^T) = t,\; C_S(w_{a_k}^S) = l}} s_{k,a_k}    (2)

Here, a target-side cluster t ∈ C_T is linked to a source-side cluster l ∈ C_S such that the total alignment score between words in l and words in t is maximal. C_S and C_T stand for the source- and target-side cluster lists respectively, and LC(l) gives the target-side cluster t to which l is linked.

4.3 Approach 3: Cross-Lingual Clustering (XC)

The direct cluster linking approach is limited by the size of the alignment data available in the form of parallel corpora, which is typically much smaller than the monolingual data. To circumvent this problem, Täckström et al. (2012) introduced cross-lingual clustering. In cross-lingual clustering, the objective function maximizes the joint likelihood of monolingual and cross-lingual factors. A monolingual clustering algorithm seeks a word-cluster association that maximizes the likelihood of the words and their clusters; in cross-lingual clustering, the same clustering is instead explained in terms of maximizing the likelihood of the monolingual word-cluster pairs of the source, those of the target, and the alignments between them. Formally, as stated in Täckström et al. (2012), using the model of Uszkoreit and Brants (2008), the likelihood of a sequence of word tokens w = [w_j]_{j=1}^{m}, with w_j ∈ V, can be factored as

L(\mathbf{w}; C) = \prod_{j=1}^{m} p(w_j \mid C(w_j)) \, p(C(w_j) \mid w_{j-1})    (3)

Note that this is different from the likelihood of Brown et al. (1992) in Equation (1), where C(w_j) was conditioned on C(w_{j-1}); this makes the computation easier, as suggested in the original paper. In a cross-lingual setting, Equation (3) is transformed as follows:

L_{S,T}(\mathbf{w}^S, \mathbf{w}^T; \alpha_{T|S}, \alpha_{S|T}, C_S, C_T) = L_S(\cdot) \cdot L_T(\cdot) \cdot L_{T|S}(\cdot) \cdot L_{S|T}(\cdot)    (4)

Here, L_{T|S}(\cdot) and L_{S|T}(\cdot) are factors based on word alignments, which can be represented as:

L_{T|S}(\mathbf{w}^T; \alpha_{T|S}, C_T, C_S) = \prod_{k \in \alpha_{T|S}} p(w_k^T \mid C_T(w_k^T)) \, p(C_T(w_k^T) \mid C_S(w_{a_k}^S))    (5)

Based on the optimization objective in Equation (4), a pseudo-algorithm is given in Algorithm 2. For more details, readers are referred to Täckström et al. (2012).

Algorithm 2 Cross-lingual Clustering (XC)
Input: Source and target language corpora
Output: Cross-lingual clusters
1: ## C_S, C_T randomly initialized
2: for i ← 1 to N do
3:   Find C_S* ≈ argmax_{C_S} L_S(w^S; C_S)
4:   Project C_S* to C_T
5:   Find C_T* ≈ argmax_{C_T} L_T(w^T; C_T)
6:   Project C_T* to C_S
7: end for

An MT-based CLSA approach is used as the baseline: training data from S is translated to T, a classification model is learned using unigram-based features, and the classifier is then tested directly on data from T.

5 Experimental Setup

Analysis was performed on three languages, viz., English (En), Hindi (Hi) and Marathi (Mar). CLSA was performed on two language pairs, English-Hindi and English-Marathi.
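Before detailing the data, here is a minimal sketch of the direct cluster linking step in Equation (2). It assumes the word alignments have already been pooled from both directions into (source word, target word, score) triples, and that C_S and C_T are available as word-to-cluster dictionaries; all names are illustrative.

```python
# Minimal sketch of direct cluster linking (Equation 2): each source cluster is
# linked to the target cluster receiving the largest total alignment score.
from collections import defaultdict

def link_clusters(alignments, c_src, c_tgt):
    """Return a dict: source cluster -> best-matching target cluster."""
    totals = defaultdict(float)              # (src_cluster, tgt_cluster) -> score mass
    for w_s, w_t, score in alignments:
        if w_s in c_src and w_t in c_tgt:
            totals[(c_src[w_s], c_tgt[w_t])] += score
    best = {}
    for (l, t), mass in totals.items():
        if mass > best.get(l, (None, 0.0))[1]:
            best[l] = (t, mass)
    return {l: t for l, (t, _) in best.items()}

# Usage with toy data:
aligns = [("good", "accha", 0.9), ("nice", "accha", 0.7), ("bad", "bura", 0.8)]
c_en = {"good": "C12", "nice": "C12", "bad": "C47"}
c_hi = {"accha": "K3", "bura": "K9"}
print(link_clusters(aligns, c_en, c_hi))     # {'C12': 'K3', 'C47': 'K9'}
```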
For clustering the words, monolingual data of Indian Languages Corpora Initiative (ILCI)2 was used. It should also be noted that sentiment annotated data was also included in the data used for the word clusterings process. For Brown clustering, an implementation by Liang (2005) was used. Cross-lingual clustering for CLSA 2http://sanskrit.jnu.ac.in/ilci/index. jsp was implemented as directed in T¨ackstr¨om et al. (2012). Monolingual SA: For experiments in English, two polarity datasets were used. The first one (En-TD) by Ye et al. (2009) contains userwritten reviews on travel destinations. The dataset consists of approximately 600 positive and 591 negative reviews. Reviews were also manually sense annotated using WordNet 2.1. The sense annotation was performed by two annotators with an inter-annotation agreement of 93%. The second dataset (En-PD)3 on product reviews (music instruments) from Amazon by Blitzer et al. (2007) contains 1000 positive and 1000 negative reviews. This dataset was sense annotated using an automatic WSD engine which was trained on tourism domain (Khapra et al., 2010). Experiments using this dataset were done to study the effect of domain on CLSA. For experiments in Hindi and Marathi, polarity datasets by Balamurali et al. (2012) were used.4 These are reviews collected from various Hindi and Marathi blogs and Sunday editorials. Hindi dataset consist of 98 positive and 100 negative reviews. Whereas Marathi dataset contains 75 positive and 75 negative reviews. Apart from being marked with polarity labels at document level, they are also manually sense annotated using Hindi and Marathi WordNet respectively. CLSA: The same datasets used in SA are also used for CLSA. Three approaches (as described in section 4) were tested for English-Hindi and English-Marathi language pairs. To create alignments, English-Hindi and English-Marathi parallel corpora from ILCI were used. EnglishHindi parallel corpus contains 45992 sentences and English-Marathi parallel corpus contains 47881 sentences. To create alignments, GIZA++5 was used (Och and Ney, 2003). As a preprocessing step, all stop words were removed. Stemming was performed on English and Hindi whereas for Marathi data, Morphological Analyzer was used to reduce the words to their respective lemmas. All experiments were performed using C-SVM 3http://www.cs.jhu.edu/˜mdredze/ datasets/sentiment/ 4http://www.cfilt.iitb.ac. in/resources/senti/MPLC_tour_ downloaderInfo.php 5http://www-i6.informatik.rwth-aachen. de/Colleagues/och/software/GIZA++.html 416 Features En-TD En-PD Hi Mar Words 87.02 77.60 77.36 92.28 WordNet Sense (Paradigmatic) 89.13 74.50 85.80 96.88 Clusters (Syntagmatic) 97.45 87.80 83.50 ✠ 98.66 Table 1: Classification accuracy for monolingual sentiment analysis. For English, results are reported on two publicly available datasets based on Travel Domain (TD) and Product Domain (PD). Features Words Clust-200 Clust-500 Clust-1000 Clust-1500 Clust-2000 Clust-2500 Clust-3000 En-TD 87.02 97.37 97.45 96.94 96.94 96.52 96.52 96.52 En-PD 77.60 73.20 82.30 84.30 86.35 86.45 87.80 87.40 Table 2: Classification accuracy (in %) versus cluster size (number of clusters to be used). (linear kernel with parameter optimized over training set using 5 fold cross validation) available as a part of LibSVM package6. SVM was used since it is known to perform well for sentiment classification (Pang et al., 2002). Results reported are based on the average of ten-fold crossvalidation accuracies. 
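The classification protocol just described can be summarized in a few lines. The sketch below uses scikit-learn as a stand-in for the LibSVM C-SVM, with the 5-fold grid search over C inside each of the 10 outer folds mirroring the parameter optimization over the training set.

```python
# Sketch of the classification protocol: linear C-SVM, C tuned by 5-fold CV on
# the training portion, accuracy averaged over 10-fold cross-validation.
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer

def evaluate(documents, labels):
    """documents: strings whose tokens are words, synset ids, or cluster ids."""
    pipe = make_pipeline(
        CountVectorizer(),
        GridSearchCV(SVC(kernel="linear"),
                     {"C": [0.01, 0.1, 1, 10]}, cv=5),
    )
    return cross_val_score(pipe, documents, labels, cv=10).mean()
```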
Standard text metrics are used for reporting the experimental results. 6 Results Monolingual classification results are shown in Table71. Table shows accuracies of SA systems developed on feature set based on words, senses and clusters. It must be noted that accuracies reported for cluster based features are with respect to the best accuracy based on different cluster sizes. The improvements in results of cluster features based approach is found to be statistically significant over the word features based approach and sense features based approach at 95% confidence level when tested using a paired t-test (except for Hindi cluster features based approach). But in general, their accuracies do not significantly vary after cluster size crosses 1500. Table 2 shows the classification accuracy variation when cluster size is altered. For, En-TD and En-PD experiments, the cluster size was varied between 200-3000 with an interval of 500 (after a size of 500). In the En-TD experiment, the best accuracy is achieved for cluster size 500, which is lesser than the number of unique-words/unique-senses (6435/6004) present in the data. Similarly, for the En-PD experiment, 6http://www.csie.ntu.edu.tw/˜cjlin/ libsvm 7All results reported here are based on 10-fold except for Marathi (2-fold-5-repeats), as it had comparatively lesser data samples. the optimal cluster size of 2500 is also lesser than the number of unique-words/unique-senses (30468/4735) present in the data. To see the effect of training data size variation for different SA approaches in the En-TD experiment, the training data size is varied between 50 to 500. For this, a test set consisting of 100 positive and 100 negative documents is fixed. The training data size is varied by selecting different number of documents from rest of the dataset (∼500 negative and ∼500 positive) as a training set. For each training data set 10 repeats are performed, e.g., for training data size of 50, 50 negative and 50 positive documents are randomly selected from the training data pool of ∼500 negative and ∼500 positive. This was repeated 10 times (with replacement). The results of this experiment are presented in Figure 1. 70 75 80 85 90 95 100 0 100 200 300 400 500 Accuracy(%) Training data size Words Senses (Paradigmatic) Clusters (Syntagmatic) Figure 1: Training data variation on En-TD dataset. Cross-lingual SA accuracies are presented in Table 3. As in monolingual case, the reported accuracies are for features based on the best cluster size. 417 Target Language MT PS DCL XC T=Hi 63.13 53.80 51.51 66.16 T=Mar NA 54.00 56.00 60.30 Table 3: Cross-Lingual SA accuracy (%) on T=Hi and T=Mar with S=En for different approaches (MT=Machine Translation, PS=Projection based on Sense, DCL=Direct Cluster Linking , XC=CrossLingual Clustering. There is no MT system available for (S=En, T=Mar). 7 Discussions In this section, some important observations from the results are discussed. 1. Syntagmatic analysis may be used in lieu of paradigmatic analysis for SA: The results suggest that word cluster based features using syntagmatic analysis is comparatively better than cluster (sense) based features using paradigmatic analysis. For two datasets in English and for the one in Marathi this holds true. For English, the gap between classification accuracy based on sense features and cluster features is around 10%. A state-of-art accuracy is obtained for the public dataset on travel domain (En-TD). The difference in accuracy reduces as the language gets morphologically rich. 
In a morphologically rich language, morphology encompasses syntactical information, limiting the context it can provide for clustering. This can be seen from the classification results on Marathi. However for Hindi, classifier built on features based on syntagmatic analysis trails the one based on paradigmatic analysis. Compared to Marathi, Hindi is a less morphologically rich language, hence, a better result was expected. However, a contrary result was obtained.✠In Hindi, the subject and the object of the sentence are linked using a case marker. Upon error analysis, it was found that there was a lot of irregular compounding based on case markers. Case markers were compounded with the succeeding word. This is a deviation from the real scenario which would have resulted in incorrect clustering leading to an unexpected result. However, the same would not have occurred for a classifier developed on sense based features as it was manually sense tagged. Clustering induces a reduction in the data sparsity. For example, on En-PD, percentage of features present in the test set and not present in the training set to those present in the test set are 34.17%, 11.24%, 0.31% for words, synsets and cluster based features respectively. The improvement in the performance of classifiers may be attributed to this feature size reduction. However, it must be noted that clustering based on unlabelled corpora is less taxing than manually creating paradigmatic property based clusters like WordNet synsets. Barring one instance, both cluster based features outperform word based features. The reason for the drop in the accuracy of approach based on sense features for En-PD dataset is the domain specific nature of sentiment analysis (Blitzer et al., 2007), which is explained in the next point. 2. Domain issues are resolved while using cluster based features: For En-PD, the classifier developed using sense features based on paradigmatic analysis performs inferior to word based features. Compared to other datasets used for analysis, this dataset was sense annotated using an automatic WSD engine. This engine was trained on a travel domain corpus and as WSD is also domain specific, the final classification performance suffered. Additionally, as the target domain was on products, the automatic WSD engine employed had an in-domain accuracy of 78%. The sense disambiguation accuracy of the same would have lowered in a cross-domain setting. This might have had a degrading effect on the SA accuracy. However, it was seen that classifier developed on cluster features based on syntagmatic analysis do not suffer from this. Such clusters obliterate domain relates issues. In addition, as more unlabelled data is included for clustering, the classification accuracy improves.8 Thus, clustering may be employed to tackle other specific domain related issues in SA. 8It was observed that adding 0.1 million unlabelled documents, SA accuracy improved by 1%. This was observed in the case of English for which there is abundant unlabelled corpus. 418 3. Cluster based features using syntagmatic analysis requires lesser training data: Cluster based features drastically reduces the dimension of the feature vector. For instance, the size of sense based features for En-TD dataset was 1/6th of the size of word based features. This reduces the perplexity of the classification model. The reduction in the perplexity leads to the reduction of training documents to attain the same classification accuracy without any dimensionality reduction. 
This is evident from Figure 1 where accuracy of the cluster features based on unlabelled corpora are higher even with lesser training data. 4. Effect of cluster size: The cluster size (number of clusters employed) has an implication on the purity of each cluster with respect to the application. The system performance improved upon increasing the cluster size and converged after attaining a certain level of accuracy. In general, it was found that the best classification accuracy was obtained for a cluster size between 1000 and 2500. As evident from Table 2, once the optimal accuracy is obtained, no significant changes were observed by increasing the cluster size. 5. Clustering based CLSA is effective: For target language as Hindi, CLSA accuracy based on cross-lingual clustering (syntagmatic) outperforms the one based on MT (refer to Table 3). This was true for the constraint clustering approach based on cross-lingual clustering. Whereas, sentiment classifier using sense (PS) or direct cluster linking (DCL) is not very effective. In case of PS approach, the coverage of the multidict was a problem. The number of a linkages between sense from English to Hindi is only around 1/3rd the size of Princeton WordNet (Fellbaum, 1998). Similarly in case of DCL approach, monolingual likelihood is different from the cross-lingual likelihood in terms of the linkages. 6. A note on CLSA for truly resource scarce languages: Note that there is no publicly available MT system for English to Marathi. Moreover, the digital content in Marathi language does not have a standard encoding format. This impedes the automatic crawling of the web for corpora creation for SA. Much manual effort has to be put to collect enough corpora for analysis. However, even in these languages, unlabelled corpora is easy to obtain. Marathi was chosen to depict a truly resource scarce SA scenario. Cluster features based classifier comparatively performed well with 60% classification accuracy. An MT based system would have suffered in this case as Marathi, as stated earlier, is a morphologically rich language and as compared to English, has a different word ordering. This could degrade the accuracy of the machine translation itself, limiting the performance of an MT based CLSA system. All this is obliterated by the use of a cluster based CLSA approach. Moreover, as more monolingual copora is added for clustering, the cross lingual cluster linkages could be refined. This can further boost the CLSA accuracy. 8 Conclusion and Future Work This paper explored feasibility of using word cluster based features in lieu of features based on WordNet senses for sentiment analysis to alleviate the problem of data sparsity. Abstractly, the motivation was to see if highly effective features based on paradigmatic property based clustering could be replaced with the inexpensive ones based on syntagmatic property for SA. The study was performed for both monolingual SA and cross-lingual SA. It was found that cluster features based on syntagmatic analysis are better than the WordNet sense features based on paradigmatic analysis for SA. Invesitgation revealed that a considerable decrease in the training data could be achieved while using such class based features. Moreover, as syntagma based word clusters are homogenous, it was able to address domain specific nature of SA as well. 
For CLSA, clusters linked together using unlabelled parallel corpora do away with the need of translating labelled corpora from one language to another using an intermediary MT system or bilingual dictionary. Such a method outperforms an MT based CLSA approach. Further, this approach was found to be useful in cases where there are no MT systems to perform CLSA and the language of analysis is truly resource scarce. Thus, wider implication of this study is that many widely spoken yet resource scare languages like Pashto, Sundanese, Hausa, Gujarati and Punjabi which do not have an MT system could now be analysed for sentiment. The approach presented here for CLSA will still require a parallel corpora. However, the size of the parallel corpora required 419 for CLSA can considerably be much lesser than the size of the parallel corpora required to train an MT system. A naive cluster linkage algorithm based on word alignments was used to perform CLSA. As a result, there were many erroneous linkages which lowered the final SA accuracy. Better clusterlinking approaches could be explored to alleviate this problem. There are many applications which use WordNet like IR, IE etc. It would be interesting to see if these could be replaced by clusters based on the syntagmatic property. References A. R. Balamurali, Aditya Joshi, and Pushpak Bhattacharyya. 2011. Harnessing wordnet senses for supervised sentiment classification. In Proceedings of EMNLP 2011, pages 1081–1091, Stroudsburg, PA, USA. A. R. Balamurali, Aditya Joshi, and Pushpak Bhattacharyya. 2012. Cross-lingual sentiment analysis for Indian languages using linked wordnets. In Proceedings of COLING 2012, pages 73–82, Mumbai, India. A. R. Balamurali, Mitesh M. Khapra, and Pushpak Bhattacharyya. 2013. Lost in translation: viability of machine translation for cross language sentiment analysis. In Proceedings of CICLing 2013, pages 38–49, Berlin, Heidelberg. Carmen Banea, Rada Mihalcea, Janyce Wiebe, and Samer Hassan. 2008. Multilingual subjectivity analysis using machine translation. In Proceedings of EMNLP 2008, pages 127–135, Honolulu, Hawaii. Farah Benamara, Sabatier Irit, Carmine Cesarano, Napoli Federico, and Diego Reforgiato. 2007. Sentiment analysis: Adjectives and adverbs are better than adjectives alone. In Proceedings of the International Conference on Weblogs and Social Media. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of ACL 2007, pages 440– 447, Prague, Czech Republic. Julian Brooke, Milan Tofiloski, and Maite Taboada. 2009. Cross-linguistic sentiment analysis: From english to spanish. In Proceedings of the International Conference RANLP-2009, pages 50–54, Borovets, Bulgaria. Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics, pages 467–479, December. D. Chandler. 2012. Semiotics for beginners. http://users.aber.ac.uk/dgc/ Documents/S4B/sem01.html. Online, accessed 20-February-2013. D. A. Cruse. 1986. Lexical Semantics. Cambridge University Press. Kushal Dave, Steve Lawrence, and David M. Pennock. 2003. Mining the peanut gallery: opinion extraction and semantic classification of product reviews. In Proceedings of WWW 2003, pages 519–528, New York, NY, USA. Kevin Duh, Akinori Fujino, and Masaaki Nagata. 2011. Is machine translation ripe for cross-lingual sentiment classification? 
In Proceedings of ACLHLT 2011, pages 429–433, Stroudsburg, PA, USA. Manaal Faruqui and Sebastian Pad´o. 2010. Training and Evaluating a German Named Entity Recognizer with Semantic Generalization. In Proceedings of KONVENS 2010, Saarbr¨ucken, Germany. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books. Hatem Ghorbel and David Jacot. 2011. Further experiments in sentiment analysis of french movie reviews. In Proceedings of AWIC 2011, pages 19–28, Fribourg, Switzerland. Gholamreza Haffari, Marzieh Razavi, and Anoop Sarkar. 2011. An ensemble model that combines syntactic and semantic clustering for discriminative dependency parsing. In Proceedings of ACL-HLT 2011, pages 710–714, Stroudsburg, PA, USA. Kanayama Hiroshi, Nasukawa Tetsuya, and Watanabe Hideo. 2004. Deeper sentiment analysis using machine translation technology. In Proceedings of COLING 2004, Stroudsburg, PA, USA. Daisuke Ikeda, Hiroya Takamura, Lev arie Ratinov, and Manabu Okumura. 2008. Learning to shift the polarity of words for sentiment classification. In Proceedings of the Third International Joint Conference on Natural Language Processing. Mitesh Khapra, Sapan Shah, Piyush Kedia, and Pushpak Bhattacharyya. 2010. Domain-specific word sense disambiguation combining corpus based and wordnet based parameters. In Proceedings of Global Wordnet Conference. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL-HLT 2008, pages 595–603, Columbus, Ohio. Percy Liang. 2005. Semi-supervised learning for natural language. M. eng. thesis, Massachusetts Institute of Technology. 420 Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of CIKM 2009, pages 375–384, New York, NY, USA. Bin Lu, Chenhao Tan, Claire Cardie, and Benjamin K. Tsou. 2011. Joint bilingual sentiment classification with unlabeled parallel corpora. In Proceedings of ACL-HLT 2011, pages 320–330, Stroudsburg, PA, USA. Sven Martin, Jrg Liermann, and Hermann Ney. 1995. Algorithms for bigram and trigram word clustering. In Speech Communication, pages 1253–1256. Justin Martineau and Tim Finin. 2009. Delta TFIDF: An improved feature space for sentiment analysis. In Proceedings of ICWSM. Shotaro Matsumoto, Hiroya Takamura, and Manabu Okumura. 2005. Sentiment classification using word sub-sequences and dependency sub-trees. In Advances in Knowledge Discovery and Data Mining, Lecture Notes in Computer Science, pages 301– 311. Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of WWW 2007, pages 171–180, New York, NY, USA. Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name tagging with word clusters and discriminative training. In Proceedings of HLT-NAACL 2004: Main Proceedings, pages 337–342, Boston, Massachusetts, USA. Einat Minkov, Kristina Toutanova, and Hisami Suzuki. 2007. Generating complex morphology for machine translation. In Proceedings of ACL 2007, pages 128–135, Prague, Czech Republic. Rajat Mohanty, Pushpak Bhattacharyya, Prabhakar Pande, Shraddha Kalele, Mitesh Khapra, and Aditya Sharma. 2008. Synset based multilingual dictionary: Insights, applications and challenges. In Proceedings of Global Wordnet Conference. Tony Mullen and Nigel Collier. 2004. Sentiment analysis using support vector machines with diverse information sources. In Proceedings of EMNLP 2004, pages 412–418, Barcelona, Spain. 
Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using crfs with hidden variables. In Proceedings of HLT-NAACL 2010, pages 786–794, Stroudsburg, PA, USA. Vincent Ng, Sajib Dasgupta, and S. M. Niaz Arifin. 2006. Examining the role of linguistic knowledge sources in the automatic identification and classification of reviews. In Proceedings of the COLING 2006, pages 611–618, Stroudsburg, PA, USA. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51, March. Bo Pang and Lillian Lee. 2002. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of EMNLP 2002, pages 79– 86, Stroudsburg, PA, USA. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1–135, January. Vassiliki Rentoumi, George Giannakopoulos, Vangelis Karkaletsis, and George A. Vouros. 2009. Sentiment analysis of figurative language using a word sense disambiguation approach. In Proceedings of RANLP 2009, pages 370–375, Borovets, Bulgaria, September. Niall Rooney, Hui Wang, Fiona Browne, Fergal Monaghan, Jann Mller, Alan Sergeant, Zhiwei Lin, Philip Taylor, and Vladimir Dobrynin. 2011. An exploration into the use of contextual document clustering for cluster sentiment analysis. In Proceedings of RANLP 2011, pages 140–145, Hissar, Bulgaria. C. Samuelsson and W. Reichl. 1999. A class-based language model for large-vocabulary speech recognition extracted from part-of-speech statistics. In Proceedings of ICASSP 1999, pages 537–540. Sara Stymne. 2012. Clustered word classes for preordering in statistical machine translation. In Proceedings of the Joint Workshop on Unsupervised and Semi-Supervised Learning in NLP, pages 28–34. Oscar T¨ackstr¨om, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual Word Clusters for Direct Transfer of Linguistic Structure. In Proceedings of NAACL-HLT 2012, pages 477–487, Montr´eal, Canada. Martin Tamara, Balahur Alexandra, and Montoyo Andres. 2010. Word sense disambiguation in opinion mining: Pros and cons. Journal Research in Computing Science, 46:119–130. Stephen Tratz and Eduard Hovy. 2011. A fast, accurate, non-projective, semantically-enriched parser. In Proceedings of EMNLP 2011, pages 1257–1268, Stroudsburg, PA, USA. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of ACL 2010, pages 384–394, Stroudsburg, PA, USA. Peter D. Turney. 2002. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. In Proceedings of ACL 2002, pages 417–424, Stroudsburg, PA, USA. 421 Jakob Uszkoreit and Thorsten Brants. 2008. Distributed word clustering for large scale class-based language modeling in machine translation. In Proceedings of ACL-HLT 2008, pages 755–762, Columbus, Ohio. Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of ACL 2009, pages 235–243, Stroudsburg, PA, USA. Gui-Rong Xue, Chenxi Lin, Qiang Yang, WenSi Xi, Hua-Jun Zeng, Yong Yu, and Zheng Chen. 2005. Scalable collaborative filtering using cluster-based smoothing. In Proceedings of SIGIR 2005, pages 114–121, New York, NY, USA. Qiang Ye, Ziqiong Zhang, and Rob Law. 2009. Sentiment classification of online reviews to travel destinations by supervised machine learning approaches. 
Expert Systems with Applications, 36(3, Part 2):6527–6535. Zhongwu Zhai, Bing Liu, Hua Xu, and Peifa Jia. 2011. Clustering product features for opinion mining. In Proceedings of WSDM 2011, pages 347–354, New York, NY, USA. Yue Zhang and Joakim Nivre. 2011. Transitionbased dependency parsing with rich non-local features. In Proceedings of ACL-HLT 2011, pages 188– 193, Stroudsburg, PA, USA. 422
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 423–433, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Large-scale Semantic Parsing via Schema Matching and Lexicon Extension Qingqing Cai Temple University Computer and Information Sciences [email protected] Alexander Yates Temple University Computer and Information Sciences [email protected] Abstract Supervised training procedures for semantic parsers produce high-quality semantic parsers, but they have difficulty scaling to large databases because of the sheer number of logical constants for which they must see labeled training data. We present a technique for developing semantic parsers for large databases based on a reduction to standard supervised training algorithms, schema matching, and pattern learning. Leveraging techniques from each of these areas, we develop a semantic parser for Freebase that is capable of parsing questions with an F1 that improves by 0.42 over a purely-supervised learning algorithm. 1 Introduction Semantic parsing is the task of translating natural language utterances to a formal meaning representation language (Chen et al., 2010; Liang et al., 2009; Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2011). There has been recent interest in producing such semantic parsers for large, heterogeneous databases like Freebase (Krishnamurthy and Mitchell, 2012; Cai and Yates, 2013) and Yago2 (Yahya et al., 2012), which has driven the development of semi-supervised and distantlysupervised training methods for semantic parsing. Previous purely-supervised approaches have been limited to smaller domains and databases, such as the GeoQuery database, in part because of the cost of labeling enough samples to cover all of the logical constants involved in a domain. This paper investigates a reduction of the problem of building a semantic parser to three standard problems in semantics and machine learning: supervised training of a semantic parser, schema matching, and pattern learning. Figure 1 provides a visualization of our system architecture. We apply an existing supervised training algorithm for semantic parsing to a labeled data set. We (sentence, logical form) Training data Test questions Freebase Web Relations Extracted from Web Supervised Semantic Parser Learning MATCHER LEXTENDER Freebase PCCG Grammar and Lexicon (word, Freebase symbol) correspondences High-Coverage Freebase PCCG Grammar and Lexicon Figure 1: We reduce the task of learning a largescale semantic parser to a combination of 1) a standard supervised algorithm for learning semantic parsers; 2) our MATCHER algorithm for finding correspondences between words and database symbols; and 3) our LEXTENDER algorithm for integrating (word, database symbol) matches into a semantic parsing lexicon. apply schema matching techniques to the problem of finding correspondences between English words w and ontological symbols s. And we apply pattern learning techniques to incorporate new (w, s) pairs into the lexicon of the trained semantic parser. This reduction allows us to apply standard techniques from each problem area, which in combination provide a large improvement over the purely-supervised approaches. 
On a dataset of 917 questions taken from 81 domains of the Freebase database, a standard learning algorithm for semantic parsing yields a parser with an F1 of 0.21, in large part because of the number of logical symbols that appear during testing but never appear during training. Our techniques can extend this parser to new logical symbols through schema matching, and yield a semantic parser with an F1 of 0.63 on the same task. On a more challenging task where training and test data are divided so that all logical constants in test are never observed dur423 ing training, our approach yields a semantic parser with an F1 of 0.6, whereas the purely supervised approach cannot parse a single test question correctly. These results indicate that it is possible to automatically extend semantic parsers to symbols for which little or no training data has been observed. The rest of this paper is organized as follows. The next section discusses related work. Section 3 describes our MATCHER algorithm for performing schema matching between a knowledge base and text. Section 4 explains how we use MATCHER’s schema matching to extend a standard semantic parser to logical symbols for which it has seen no labeled training data. Section 5 analyzes the performance of MATCHER and our semantic parser. Section 6 concludes. 2 Previous Work Two existing systems translate between natural language questions and database queries over large-scale databases. Yahya et al. (2012) report on a system for translating natural language queries to SPARQL queries over the Yago2 (Hoffart et al., 2013) database. Yago2 consists of information extracted from Wikipedia, WordNet, and other resources using manually-defined extraction patterns. The manual extraction patterns pre-define a link between natural language terms and Yago2 relations. Our techniques automate the process of identifying matches between textual phrases and database relation symbols, in order to scale up to databases with more relations, like Freebase. A more minor difference between Yahya et al.’s work and ours is that their system handles SPARQL queries, which do not handle aggregation queries like argmax and count. We rely on an existing semantic parsing technology to learn the language that will translate into such aggregation queries. On the other hand, their test questions involve more conjunctions and complex semantics than ours. Developing a dataset with more complicated semantics in the queries is part of our ongoing efforts. Krishnamurthy and Mitchell (2012) also create a semantic parser for Freebase covering 77 of Freebase’s over 2000 relations. Like our work, their technique uses distant supervision to drive training over a collection of sentences gathered from the Web, and they do not require any manually-labeled training data. However, their technique does require manual specification of rules that construct CCG lexical entries from dependency parses. In comparison, we fully automate the process of constructing CCG lexical entries for the semantic parser by making it a prediction task. We also leverage synonym-matching techniques for comparing relations extracted from text with Freebase relations. Finally, we test our results on a dataset of 917 questions covering over 600 Freebase relations, a more extensive test than the 50 questions used by Krishnamurthy and Mitchell. Numerous methods exist for comparing two relations based on their sets of tuples. 
For instance, the DIRT system (Lin and Pantel, 2001) uses the mutual information between the (X, Y ) argument pairs for two binary relations to measure the similarity between them, and clusters relations accordingly. More recent examples of similar techniques include the Resolver system (Yates and Etzioni, 2009) and Poon and Domingos’s USP system (Poon and Domingos, 2009). Our techniques for comparing relations fit into this line of work, but they are novel in their application of these techniques to the task of comparing database relations and relations extracted from text. Schema matching (Rahm and Bernstein, 2001; Ehrig et al., 2004; Giunchiglia et al., 2005) is a task from the database and knowledge representation community in which systems attempt to identify a “common schema” that covers the relations defined in a set of databases or ontologies, and the mapping between each individual database and the common schema. Owing to the complexity of the general case, researchers have resorted to defining standard similarity metrics between relations and attributes, as well as machine learning algorithms for learning and predicting matches between relations (Doan et al., 2004; Wick et al., 2008b; Wick et al., 2008a; Nottelmann and Straccia, 2007; Berlin and Motro, 2006). These techniques consider only matches between relational databases, whereas we apply these ideas to matches between Freebase and extracted relations. Schema matching in the database sense often considers complex matches between relations (Dhamanka et al., 2004), whereas as our techniques are currently restricted to matches involving one database relation and one relation extracted from text. 424 3 Textual Schema Matching 3.1 Problem Formulation The textual schema matching task is to identify natural language words and phrases that correspond with each relation and entity in a fixed schema for a relational database. To formalize this task, we first introduce some notation. A schema S = (E, R, C, I) consists of a set of entities E, a set of relations R, a set of categories C, and a set of instances I. Categories are one-argument predicates (e.g., film(e)), and relations are two- (or more-) argument predicates (e.g., directed by(e1, e2)). Instances are known tuples of entities that make a relation or category true, such as film(Titanic) or directed by(Titanic, James Cameron). For a given r ∈R (or c ∈C), IS(r) indicates the set of known instances of r in schema S (and likewise for IS(c)). Examples of such schemas include Freebase (Bollacker et al., 2008) and Yago2 (Hoffart et al., 2013). We say a schema is a textual schema if it has been extracted from free text, such as the Nell (Carlson et al., 2010) and ReVerb (Fader et al., 2011) extracted databases. Given a textual schema T and a database schema D, the textual schema matching task is to identify an alignment or matching M ⊂RT ×RD such that (rT , rD) ∈M if and only if rT can be used to refer to rD in normal language usage. The problem would be greatly simplified if M were a 1-1 function, but in practice most database relations can be referred to in many ways by natural language users: for instance, film actor can be referenced by the English verbs “played,” “acted,” and “starred,” along with morphological variants of them. In addition, many English verbs can refer to several different relations in Freebase: “make” can refer to computer processor manufacturer or distilled spirits producer, among many others. 
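The notation above can be made concrete with a small sketch; the class and the toy relations below are illustrative assumptions, not the paper's implementation. Because a single database relation can be verbalized many ways (and a single verb can express several relations), the natural output is a score per candidate pair rather than a one-to-one mapping.

```python
# Light-weight rendering of the notation: a schema stores the instance set
# I_S(r) for each relation, and a candidate matching is a score per (r_T, r_D).
from dataclasses import dataclass, field
from typing import Dict, Set, Tuple

EntityTuple = Tuple[str, ...]

@dataclass
class Schema:
    instances: Dict[str, Set[EntityTuple]] = field(default_factory=dict)  # r -> I_S(r)

    def relations(self) -> Set[str]:
        return set(self.instances)

    def I(self, r: str) -> Set[EntityTuple]:
        return self.instances.get(r, set())

# Toy database and textual schemas:
freebase = Schema({"film.director": {("Titanic", "James Cameron")}})
reverb = Schema({"directed": {("Titanic", "James Cameron"), ("Avatar", "James Cameron")},
                 "starred in": {("Kate Winslet", "Titanic")}})

# Scored candidate matches over R_T x R_D rather than a 1-1 function:
scores: Dict[Tuple[str, str], float] = {("directed", "film.director"): 0.87,
                                        ("starred in", "film.director"): 0.12}
```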
Our MATCHER algorithm for textual schema matching handles this by producing a confidence score for every possible (rT , rD) pair, which downstream applications can then use to reason about the possible alignments. Even worse than the ambiguities in alignment, some textual relations do not correspond with any database relation exactly, but instead they correspond with a projection of a relation, or a join between multiple relations, or another complex view of a database schema. As a simple example, “actress” corresponds to a subset of the Freebase film actor relation that intersects with the set {x: gender(x, female)}. MATCHER can only determine that “actress” aligns with film actor or not; it cannot produce an alignment between “actress” and a join of film actor and gender. These more complex alignments are an important consideration for future work, but as our experiments will show, quite useful alignments can be produced without handling these more complex cases. 3.2 Identifying candidate matches MATCHER uses a generate-and-test architecture for determining M. It uses a Web search engine to issue queries for a database relation rD consisting of all the entities in a tuple t ∈ID(rD). 1000 tuples for each rD are randomly chosen for issuing queries. The system then retrieves matching snippets from the search engine results. It uses the top 10 results for each search engine query. It then counts the frequency of each word type in the set of retrieved snippets for rD. The top 500 nonstopword word types are chosen as candidates for matches with rD. We denote the candidate set for rD as C(rD). MATCHER’s threshold of 500 candidates for C(rD) results in a maximum possible recall of just less than 0.8 for the alignments in our dataset, but even if we double the threshold to 1000, the recall improves only slightly to 0.82. We therefore settled on 500 as a point with an acceptable upper bound on recall, while also producing an acceptable number of candidate terms for further processing. 3.3 Pattern-based match selection The candidate pool C(rD) of 500 word types is significantly smaller than the set of all textual relations, but it is also extremely noisy. The candidates may include non-relation words, or other frequent but unrelated words. They may also include words that are highly related to rD, but not actually corresponding textual relations. For instance, the candidate set for film director in Freebase includes words like “directed,” but also words like “film,” “movie,” “written,” “produced,” and “starring.” We use a series of filters based on synonym-detection techniques to help select the true matching candidates from C(rD). 425 Pattern Condition Example 1. “rT in E” rT ends with “-ed” and E has type datetime or location “founded in 1989” 2. “rT by E” rT ends with “-ed” “invented by Edison” 3. “rT such as E” rT ends with “-s” “directors such as Tarantino” 4. “E is a(n) rT ” all cases “Paul Rudd is an actor” Table 1: Patterns used by MATCHER as evidence of a match between rD and rT . E represents an entity randomly selected from the tuples in ID(rD). The first type of evidence we consider for identifying true matches from C(rD) consists of pattern-matching. Relation words that express rD will often be found in complex grammatical constructions, and often they will be separated from their entity arguments by long-distance dependencies. 
However, over a large corpus, one would expect that in at least some cases the relation word will appear in a simple, relatively unambiguous grammatical construction that connects rT with entities from rD. For instance, entities e from the relation automotive designer appear in the pattern "designed by e" more than 100 times as often as the next most-common patterns, "considered by e" and "worked by e." MATCHER uses searches over the Web to count the number of instances where a candidate rT appears in simple patterns that involve entities from rD. Greater counts for these patterns yield greater evidence of a correct match between rD and rT. Table 1 provides the list of patterns that we consider. For each rD and each rT ∈ C(rD), MATCHER randomly selects 10 entities from rD's tuples to include in its pattern queries. Two of the patterns are targeted at past-tense verbs, and the other two patterns at nominal relation words. MATCHER computes statistics similar to pointwise mutual information (PMI) (Turney, 2001) to measure how related rD and rT are, for each pattern p. Let c(p, rD, rT) indicate the sum of all the counts for a particular pattern p, database relation, and textual relation:

f_p(r_T, r_D) = \frac{c(p, r_D, r_T)}{\sum_{r'_D} c(p, r'_D, r_T) \cdot \sum_{r'_T} c(p, r_D, r'_T)}

For the sum over all r'_D, we use all r'_D in Freebase for which rT was extracted as a candidate. One downside of the pattern-matching evidence is the sheer number of queries it requires. Freebase currently has over 2,000 relations. For each rD, we have up to 500 candidate rT, up to 4 patterns, and up to 10 entities per pattern. To cover all of Freebase, MATCHER needs 2,000 × 500 × 4 × 10 = 40 million queries, or just over 1.25 years if it issues 1 query per second (we covered approximately one-quarter of Freebase's relations in our experiments). Using more patterns and more entities per pattern is desirable for accumulating more evidence about candidate matches, but there is a trade-off with the time required to issue the necessary queries.

3.4 Comparing database relations with extracted relations

Open Information Extraction (Open IE) systems (Banko et al., 2007) can often provide a large set of extracted tuples for a given rT, which MATCHER can then use to make much more comprehensive comparisons with the full tuple set for rD than the pattern-matching technique allows. MATCHER employs a form of PMI to compute the degree of relatedness between rD and rT. In its simplest form, MATCHER computes:

PMI(r_T, r_D) = \frac{|I_D(r_D) \cap I_T(r_T)|}{|I_D(r_D)| \cdot |I_T(r_T)|}    (1)

While this PMI statistic is already quite useful, we have found that in practice there are many cases where an exact match between tuples in ID(rD) and tuples in IT(rT) is too strict a criterion. MATCHER uses a variety of approximate matches to compute variations of this statistic. Considered as predictors for the true matches in M, these variations of the PMI statistic have lower precision, in that they are more likely to have high values for incorrect matches. However, they also have higher recall: that is, they will have a high value for correct candidates in C(rD) when the strict version of PMI does not. Table 2 lists all the variations used by MATCHER.
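Before turning to those variations, the two statistics defined so far can be sketched directly; the count table and instance sets below are illustrative data structures, not MATCHER's actual storage.

```python
# Sketch of the two basic MATCHER statistics.
from collections import defaultdict

# counts[p][(r_D, r_T)] = web hit count for pattern p with this pair
counts = defaultdict(lambda: defaultdict(int))

def f_p(p, r_T, r_D):
    """Pattern statistic: joint count normalized by both marginals."""
    c = counts[p]
    joint = c[(r_D, r_T)]
    sum_over_dbs = sum(v for (d, t), v in c.items() if t == r_T)    # over r'_D
    sum_over_terms = sum(v for (d, t), v in c.items() if d == r_D)  # over r'_T
    return joint / (sum_over_dbs * sum_over_terms) if joint else 0.0

def pmi(I_D, I_T):
    """Equation (1): strict overlap between the two sets of entity tuples."""
    if not I_D or not I_T:
        return 0.0
    return len(I_D & I_T) / (len(I_D) * len(I_T))
```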
426 Statistics for (rT , rD) sκ(rT , rD) = X tD∈ID(rD) X tT ∈IT (rT ) κ(tD, tT ) |ID(rD)|·|IT (rT )| s′ κ(rT , rD) = sκ(rT ,rD) X r′ D sκ(r′ D, rT ) s′′(rT , rD) = |IT (rT )| |ID(rD)| Table 2: MATCHER statistics: for each κ function for comparing two tuples (given in Table 3), MATCHER computes the statistics above to compare rD and rT . The PMI statistic in Equation 1 corresponds to sκ where κ =strict match over Φ =full tuples. κ(t1, t2) for comparing tuples t1, t2 strict match: ( 1, if Φ(t1) = Φ′(t2) 0, otherwise. type match:      1, if ∀kcat(Φ(t1)k) = cat(Φ′(t2)k) 0, otherwise. Table 3: MATCHER’s κ functions for computing whether two tuples are similar. cat maps an entity to a category (or type) in the schema. MATCHER has a different κ function for each possible combination of Φ and Φ′ functions, which are given in Table 4. MATCHER uses an API for the ReVerb Open IE system1 (Fader et al., 2011) to collect I(rT ), for each rT . The API for ReVerb allows for relational queries in which some subset of the entity strings, entity categories, and relation string are specified. The API returns all matching triples; types must match exactly, but relation or argument strings in the query will match any relation or argument that contains the query string as a substring. MATCHER queries ReVerb with three different types of queries for each rT , specifying the types for both arguments, or just the type of the first argument, or just the second argument. Types for arguments are taken from the types of arguments for a potentially matching rD in Freebase. To avoid overwhelming the ReVerb servers, for our experiments we limited MATCHER to queries 1http://openie.cs.washington.edu/ Φ(t) for tuple t = (e1, . . . , en) ∀iei (projection to one dimension) (e1, . . . , en) (full tuple) ∀σ(·)(eσ(1), . . . , eσ(n)) (permutation) Table 4: MATCHER’s Φ functions for projecting or permuting a tuple. σ indicates a permutation of the indices. for the top 80 rT ∈C(rD), when they are ranked according to frequency during the candidate identification process. 3.5 Regression models for scoring candidates Pattern statistics, the ReVerb statistics from Table 2, and the count of rT during the candidate identification step all provide evidence for correct matches between rD and rT . MATCHER uses a regression model to combine these various statistics into a score for (rT , rD). The regression model is a linear regression with least-squares parameter estimation; we experimented with support vector regression models with non-linear kernels, with no significant improvements in accuracy. Section 5 explains the dataset we use to train this model. Unlike a classifier, MATCHER does not output any single matching M. However, downstream applications can easily convert MATCHER’s output into a matching M by, for instance, selecting the top K candidate rT values for each rD, or by selecting all (rT , rD) pairs with a score over a chosen threshold. Our experiments analyze MATCHER’s success by comparing its performance across a range of different values for the number of rT matches for each rD. 4 Extending a Semantic Parser Using a Schema Alignment An alignment between textual relations and database relations has many possible uses: for example, it might be used to allow queries over a database to be answered using additional information stored in an extracted relation store, or it might be used to deduce clusters of synonymous relation words in English. 
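Whatever the downstream use, a consumer of MATCHER's output interacts with the scoring and match-selection machinery of Section 3.5, which can be sketched as follows; the feature layout and names are assumptions for illustration.

```python
# Sketch of MATCHER's scoring step: a least-squares linear regression over each
# candidate's statistics, then a matching M read off by taking the top-K
# textual relations per database relation.
import numpy as np
from collections import defaultdict
from sklearn.linear_model import LinearRegression

def train_scorer(feature_rows, gold_labels):
    """feature_rows: one vector of statistics per (r_T, r_D) candidate."""
    return LinearRegression().fit(np.array(feature_rows), np.array(gold_labels))

def top_k_matching(candidates, scorer, k=5):
    """candidates: list of (r_T, r_D, feature_vector) triples."""
    preds = scorer.predict(np.array([f for _, _, f in candidates]))
    by_db = defaultdict(list)
    for (r_T, r_D, _), score in zip(candidates, preds):
        by_db[r_D].append((score, r_T))
    return {r_D: [r_T for _, r_T in sorted(pairs, reverse=True)[:k]]
            for r_D, pairs in by_db.items()}
```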
Here, we describe an application in which we build a questionanswering system for Freebase by extending a standard learning technique for semantic parsing with schema alignment information. As a starting point, we used the UBL system 427 developed by Kwiatkowski et al. (2010) to learn a semantic parser based on probabilistic Combinatory Categorial Grammar (PCCG). Source code for UBL is freely available. Its authors found that it achieves results competitive with the state-of-the-art on a variety of standard semantic parsing data sets, including Geo250 English (0.85 F1). Using a fixed CCG grammar and a procedure based on unification in second-order logic, UBL learns a lexicon Λ from the training data which includes entries like: Example Lexical Entries New York City ⊢NP : new york neighborhoods in ⊢ S\NP/NP : λxλy.neighborhoods(x, y) Example CCG Grammar Rules X/Y : f Y : g ⇒X : f(g) Y : g X\Y : f ⇒X : f(g) Using Λ, UBL selects a logical form z for a sentence S by selecting the z with the most likely parse derivations y: h(S) = arg maxz P y p(y, z|x; θ, Λ). The probabilistic model is a log-linear model with features for lexical entries used in the parse, as well as indicator features for relation-argument pairs in the logical form, to capture selectional preferences. Inference (parsing) and parameter estimation are driven by standard dynamic programming algorithms (Clark and Curran, 2007), while lexicon induction is based on a novel search procedure through the space of possible higher-order logic unification operations that yield the desired logical form for a training sentence. Our Freebase data covers 81 of the 86 core domains in Freebase, and 635 of its over 2000 relations, but we wish to develop a semantic parser that can scale to all of Freebase. UBL gets us part of the way there, by inducing a PCCG grammar, as well as lexical entries for function words that must be handled in all domains. It can also learn lexical entries for relations rD that appear in the training data. However, UBL has no way to learn lexical entries for the many valid (rT , rD) pairs that do not appear during training. We use MATCHER’s learned alignment to extend the semantic parser that we get from UBL by automatically adding in lexical entries for Freebase relations. Essentially, for each (rT , rD) from MATCHER’s output, we wish to construct a lexical entry that states that rT ’s semantics resembles λxλy.rD(x, y). However, this simple process is complicated by the fact that the semantic parser requires two additional types of information for each lexical entry: a syntactic category, and a weight. Furthermore, for many cases the appropriate semantics are significantly more complex than this pattern. To extend the learned semantic parser to a semantic parser for all of Freebase, we introduce a prediction task, which we call semantic lexicon extension: given a matching M together with scores for each pair in M, predict the syntactic category Syn, lambda-calculus semantics Sem, and weight W for a full lexical entry for each (rT , rD) ∈M. One advantage of the reduction approach to learning a semantic parser is that we can automatically construct training examples for this prediction task from the other components in the reduction. We use the output lexical entries learned by UBL as (potentially noisy) examples of true lexical entries for (rT , rD) pairs where rT matches the word in one of UBL’s lexical entries, and rD forms part of the semantics in the same lexical entry. 
For (rT , rD) pairs in M where rD occurs in UBL’s lexical entries, but not paired with rT , we create dummy “negative” lexical entries with very low weights, one for each possible syntactic category observed in all lexical entries. Note that in order to train LEXTENDER, we need the output of MATCHER for the relations in UBL’s training data, as well as UBL’s output lexicon from the training data. Our system for this prediction task, which we call LEXTENDER (for Lexicon eXtender), factors into three components: P(Sem|rD, rT , score), P(Syn|Sem, rD, rT , score), and P(W|Syn, Sem, rD, rT , score). This factorization is trivial in that it introduces no independence assumptions, but it helps in designing models for the task. We set the event space for random variable Sem to be the set of all lambda calculus expressions observed in UBL’s output lexicon, modulo the names of specific Freebase relations. For instance, if the lexicon includes two entries whose semantics are λxλy . film actor(x, y) and λxλy . book author(x, y), the event space would include the single expression in which relations film actor and book author were replaced by 428 a new variable: λpλxλy.p(x, y). The final semantics for a lexical entry is then constructed by substituting rD for p, or more formally, by a function application Sem(rD). The event space for Syn consists of all syntactic categories in UBL’s output lexicon, and W ranges over R. LEXTENDER’s model for Sem and Syn are Na¨ıve Bayes classifiers (NBC), with features for the part-of-speech for rT (taken from a POS tagger), the suffix of rT , the number of arguments of rD, and the argument types of rD. For Syn, we add a feature for the predicted value of Sem. For W, we use a linear regression model whose features are the score from MATCHER, the probabilities from the Syn and Sem NBC models, and the average weight of all lexical entries in UBL with matching syntax and semantics. Using the predictions from these models, LEXTENDER extends UBL’s learned lexicon with all possible lexical entries with their predicted weights, although typically only a few lexical entries have high enough weight to make a difference during parsing. Pruning entries with low weights could improve the memory and time requirements for parsing, but these were not an issue in our experiments, so we did not investigate this further. 5 Experiments We conducted experiments to test the ability of MATCHER and LEXTENDER to produce a semantic parser for Freebase. We first analyze MATCHER on the task of finding matches between Freebase relations and textual relations. We then compare the performance of the semantic parser learned by UBL with its extension provided by LEXTENDER on a dataset of English questions posed to Freebase. 5.1 Experimental Setup Freebase (Bollacker et al., 2008) is a free, online, user-contributed, relational database (www.freebase.com) covering many different domains of knowledge. The full schema and contents are available for download. The “Freebase Commons” subset of Freebase, which is our focus, consists of 86 domains, an average of 25 relations per domain (total of 2134 relations), and 615,000 known instances per domain (53 million instances total). As a reference point, the GeoQuery database — which is a standard benchmark database for semantic parsing — Examples 1. What are the neighborhoods in New York City? λx . neighborhoods(new york, x) 2. How many countries use the rupee? count(x) . countries used(rupee, x) 3. How many Peabody Award winners are there? count(x) . ∃y . 
award honor(y) ∧ award winner(y, x) ∧ award(y, peabody award) Figure 2: Example questions with their logical forms. The logical forms make use of Freebase symbols as logical constants, as well as a few additional symbols such as count and argmin, to allow for aggregation queries. contains a single domain (geography), 8 relations, and 880 total instances. Our dataset contains 917 questions (on average, 6.3 words per question) and a meaning representation for each question written in a variant of lambda calculus2. 81 domains are represented in the data set, and the lambda calculus forms contain 635 distinct Freebase relations. The most common domains, film and business, each took up no more than 6% of the overall dataset. Several examples are listed in Fig. 2. The questions were provided by two native English speakers. No restrictions were placed on the type of questions they should produce, except that they should produce questions for multiple domains. By inspection, a large majority of the questions appear to be answerable from Freebase, although no instructions were given to restrict questions to this sort. We also created a dataset of alignments from these annotated questions by creating an alignment for each Freebase relation mentioned in the logical form for a question, paired with a manually-selected word from the question. 5.2 Alignment Tests We measured the precision and recall of MATCHER’s output against the manually labeled data. Let M be the set of (rT , rD) matches produced by the system, and G the set of matches in the gold-standard manual data. We define precision and recall as: P = |M ∩ G| / |M|, R = |M ∩ G| / |G|. 2The data is available from the second author’s website. [Figure 3 plots precision against recall (“Alignment Predictions”) for the Matcher, Extractions, Pattern, and Frequency models.] Figure 3: MATCHER’s Pattern features and Extractions features complement one another, so that in combination they outperform either subset on its own, especially at the high-recall end of the curve. Figure 3 shows a Precision-Recall (PR) curve for MATCHER and three baselines: a “Frequency” model that ranks candidate matches for rD by their frequency during the candidate identification step; a “Pattern” model that uses MATCHER’s linear regression model for ranking, but is restricted to only the pattern-based features; and an “Extractions” model that similarly restricts the ranking model to ReVerb features. We have three folds in our data; the alignments for relation rD in one fold are predicted by models trained on the other two folds. Once all of the alignments in all three folds are scored, we generate points on the PR curve by applying a threshold to the model’s ranking, and treating all alignments above the threshold as the set of predicted alignments. All regression models for learning alignments outperform the Frequency ranking by a wide margin. The Pattern model outperforms the Extractions model at the high-precision, low-recall end of the curve. At the high-recall points, the Pattern model drops quickly in precision. However, the combination of the two kinds of features in MATCHER yields improved precision at all levels of recall. 5.3 Semantic Parsing Tests While our alignment tests can tell us in relative terms how well different models are performing, it is difficult to assess these models in absolute terms, since alignments are not typical applications that people care about in their own right. We now compare our alignments on a semantic parsing task for Freebase.
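Before doing so, the threshold sweep just described can be sketched as follows, assuming the alignment model's output is available as scored (rT, rD) triples and the gold standard as a set of pairs; the toy data is made up, and this is an illustration of the evaluation, not the authors' evaluation code.

```python
def pr_curve(scored_matches, gold):
    """Trace precision-recall points by thresholding ranked matches.

    scored_matches: list of (r_T, r_D, score) triples.
    gold:           set of (r_T, r_D) pairs from the manual alignment.
    Returns a list of (threshold, precision, recall) triples.
    """
    points = []
    for threshold in sorted({s for _, _, s in scored_matches}, reverse=True):
        predicted = {(rt, rd) for rt, rd, s in scored_matches if s >= threshold}
        correct = predicted & gold
        precision = len(correct) / len(predicted) if predicted else 0.0
        recall = len(correct) / len(gold) if gold else 0.0
        points.append((threshold, precision, recall))
    return points


if __name__ == "__main__":
    # Hypothetical relation names, for illustration only.
    gold = {("mayor of", "government_position_held"),
            ("born in", "place_of_birth")}
    scored = [("mayor of", "government_position_held", 0.9),
              ("mayor of", "headquarters", 0.4),
              ("born in", "place_of_birth", 0.7)]
    for t, p, r in pr_curve(scored, gold):
        print(f"threshold={t:.1f}  P={p:.2f}  R={r:.2f}")
```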
In a first semantic parsing experiment, we train UBL, MATCHER, and LEXTENDER on a random sample of 70% of the questions, and test them on the remaining 30%. In a second test, we focus on the hard case where all questions from the test set contain logical constants that have never been seen before during training. We split the data into 3 folds, making sure that no Freebase domain has symbols appearing in questions in more than one fold. We then perform 3-fold crossvalidation for all of our supervised models. We varied the number of matches that the alignment model (MATCHER, Pattern, Extractions, or Frequency) could make for each Freebase relation, and measured semantic parsing performance as a function of the number of matches. Figure 4 shows the F1 scores for these semantic parsers, judged by exact match between the top-scoring logical form from the parser and the manually-produced logical form. Exact-match tests are overly-strict, in the sense that the system may be judged incorrect even when the logical form that is produced is logically equivalent to the correct logical form. However, by inspection such cases appear to be very rare in our data, and the exact-match criterion is often used in other semantic parsing experimental settings. The semantic parsers produced by MATCHER+LEXTENDER and the other alignment techniques significantly outperform the baseline semantic parser learned by UBL, which achieves an overall F1 of 0.21 on these questions in the 70/30 split of the data, and an F1 of 0 in the cross-domain experiment. Purely-supervised approaches to this data are severely limited, since they have almost no chance of correctly parsing questions that refer to logical symbols that never appeared during training. However, MATCHER and LEXTENDER combine with UBL to produce an effective semantic parser. The best semantic parser we tested, which was produced by UBL, MATCHER, and LEXTENDER with 9 matches per Freebase relation, had a precision of 0.67 and a recall of 0.59 on the 70/30 split experiment. The difference in alignment performance between MATCHER, Pattern, and Extractions carries over to semantic parsing. MATCHER drops in F1 with more matches as additional matches tend to be low-quality and low-probability, whereas Pat430 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0 5 10 15 20 25 30 F1 for exact match of logical forms Number of Matches per Freebase Relation Semantic Parsing (70/30 Split) Matcher Pattern Extractions Frequency Baseline 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0 5 10 15 20 25 30 F1 for exact match of logical forms Number of Matches per Freebase Relation Semantic Parsing (Cross-Domain) Matcher Extractions Pattern Frequency Figure 4: Semantic parsers produced by UBL+MATCHER+LEXTENDER outperform the purelysupervised baseline semantic parser on a random 70/30 split of the data (left) by as much as 0.42 in F1. In the case of this split and in the case of a cross-domain experiment (right), UBL+MATCHER+LEXTENDER outperforms UBL+Pattern+LEXTENDER by as much as 0.06 in F1. tern and Extractions keep improving as more lowprobability alignments are added. Interestingly, the Extractions model begins to overtake the Pattern model in F1 at higher numbers of matches, and all three models trend toward convergence in F1 with increasing numbers of matches. Nevertheless, MATCHER clearly improves over both, and reaches a higher F1 than either Pattern or Extractions using a small number of matches, which corresponds to a smaller lexicon and a leaner model. 
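A minimal sketch of exact-match scoring in this style, under the assumption (ours, not necessarily the authors') that precision counts correct parses among questions for which a logical form was produced, recall counts them among all questions, and only whitespace is normalized before comparison; the example logical forms are toy data.

```python
def exact_match_f1(predictions, gold):
    """predictions: dict question -> logical form string, or None if no parse.
    gold:        dict question -> gold logical form string."""
    def normalize(lf):
        # Illustrative normalization only: collapse whitespace.
        return " ".join(lf.split())

    produced = {q: lf for q, lf in predictions.items() if lf is not None}
    correct = sum(1 for q, lf in produced.items()
                  if q in gold and normalize(lf) == normalize(gold[q]))
    precision = correct / len(produced) if produced else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1


if __name__ == "__main__":
    gold = {"q1": "lambda x.neighborhoods(new_york, x)",
            "q2": "count(x).countries_used(rupee, x)"}
    pred = {"q1": "lambda x.neighborhoods(new_york,  x)", "q2": None}
    print(exact_match_f1(pred, gold))  # (1.0, 0.5, 0.666...)
```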
To place these results in context, many different semantic parsers for databases like GeoQuery and ATIS (including parsers produced by UBL) have achieved F1 scores of 0.85 and higher. However, in all such tests, the test questions refer to logical constants that also appeared during training, allowing supervised techniques for learning semantic parsers to achieve strong accuracy. As we have argued, Freebase is large enough that is difficult to produce enough labeled training data to cover all of its logical constants. An unsupervised semantic parser for GeoQuery has achieved an F1 score of 0.66 (Goldwasser et al., 2011), impressive in its own right and slightly better than our F1 score. However, this parser was given questions which it knew a priori to contain words that refer to the logical constants in the database. Our MATCHER and LEXTENDER systems address a different challenge: how to learn a semantic parser for Freebase given the Web and a set of initial labeled questions. 6 Conclusion Scaling semantic parsing to large databases requires an engineering effort to handle large datasets, but also novel algorithms to extend semantic parsing models to testing examples that look significantly different from labeled training data. The MATCHER and LEXTENDER algorithms represent an initial investigation into such techniques, with early results indicating that semantic parsers can handle Freebase questions on a large variety of domains with an F1 of 0.63. We hope that our techniques and datasets will spur further research into this area. In particular, more research is needed to handle more complex matches between database and textual relations, and to handle more complex natural language queries. As mentioned in section 3.1, words like “actress” cannot be addressed by the current methodology, since MATCHER assumes that a word maps to a single Freebase relation, but the closest Freebase equivalent to the meaning of “actress” involves the two relations film actor and gender. Another limitation is that our current methodology focuses on finding matches for nouns and verbs. Other important limitations of the current methodology include: • the assumption that function words have no domain-specific meaning, which prepositions in particular can violate; • low accuracy when there are few relevant results among the set of extracted relations; • and the restriction to a single database (Freebase) for finding answers. While significant challenges remain, the reduction of large-scale semantic parsing to a combination of schema matching and supervised learning offers a new path toward building high-coverage semantic parsers. 431 References Yoav Artzi and Luke Zettlemoyer. 2011. Bootstrapping Semantic Parsers from Conversations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). M. Banko, M. J. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction from the web. In IJCAI. Jacob Berlin and Amihai Motro. 2006. Database schema matching using machine learning with feature selection. In Advanced Information Systems Engineering. Springer. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the International Conference on Management of Data (SIGMOD), pages 1247–1250. Qingqing Cai and Alexander Yates. 2013. Semantic Parsing Freebase: Towards Open-Domain Semantic Parsing. 
In Second Joint Conference on Lexical and Computational Semantics. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Toward an Architecture for NeverEnding Language Learning. In Proceedings of the Twenty-Fourth Conference on Artificial Intelligence (AAAI 2010). David L. Chen, Joohyun Kim, and Raymond J. Mooney. 2010. Training a Multilingual Sportscaster: Using Perceptual Context to Learn Language. Journal of Artificial Intelligence Research, 37:397–435. Stephen Clark and James R. Curran. 2007. Widecoverage efficient statistical parsing with ccg and log-linear models. Computational Linguistics, 33(4):493–552. J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Computational Natural Language Learning (CoNLL). R. Dhamanka, Y. Lee, A. Doan, A. Halevy, and P. Domingos. 2004. iMAP: Discovering Complex Semantic Matches between Database Schemas. In SIGMOD. A. Doan, J. Madhavan, P. Domingos, and A. Halevy. 2004. Ontology Matching: A Machine Learning Approach. In S. Staab and R. Studer, editors, Handbook on Ontologies in Information Systems, pages 397–416. Springer-Verlag. M. Ehrig, P. Haase, N. Stojanovic, and M. Hefke. 2004. Similarity for ontologies-a comprehensive framework. In Workshop Enterprise Modelling and Ontology: Ingredients for Interoperability, PAKM. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying Relations for Open Information Extraction. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Fausto Giunchiglia, Pavel Shvaiko, and Mikalai Yatskevich. 2005. Semantic schema matching. On the Move to Meaningful Internet Systems 2005: CoopIS, DOA, and ODBASE, pages 347–365. D. Goldwasser, R. Reichart, J. Clarke, and D. Roth. 2011. Confidence driven unsupervised semantic parsing. In Association for Computational Linguistics (ACL). Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich, and Gerhard Weikum. 2013. YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia. Artificial Intelligence, 194:28–61, January. Jayant Krishnamurthy and Tom Mitchell. 2012. Weakly Supervised Training of Semantic Parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing Probabilistic CCG Grammars from Logical Form with Higherorder Unification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). P. Liang, M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP). P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL). D. Lin and P. Pantel. 2001. DIRT – Discovery of Inference Rules from Text. In KDD. Henrik Nottelmann and Umberto Straccia. 2007. Information retrieval and machine learning for probabilistic schema matching. Information processing & management, 43(3):552–576. Hoifung Poon and Pedro Domingos. 2009. Unsupervised semantic parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP ’09, pages 1–10, Stroudsburg, PA, USA. Association for Computational Linguistics. E. Rahm and P.A. Bernstein. 2001. A survey of approaches to automatic schema matching. 
The VLDB Journal, 10:334–350. P. D. Turney. 2001. Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL. In Procs. of the Twelfth European Conference on Machine Learning (ECML), pages 491–502, Freiburg, Germany. 432 M. Wick, K. Rohanimanesh, A. McCallum, and A.H. Doan. 2008a. A discriminative approach to ontology mapping. In International Workshop on New Trends in Information Integration (NTII) at VLDB WS. M.L. Wick, K. Rohanimanesh, K. Schultz, and A. McCallum. 2008b. A unified approach for schema matching, coreference and canonicalization. In Proceeding of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, Maya Ramanath, Volker Tresp, and Gerhard Weikum. 2012. Natural Language Questions for the Web of Data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Alexander Yates and Oren Etzioni. 2009. Unsupervised methods for determining object and relation synonyms on the web. Journal of Artificial Intelligence Research (JAIR), 34:255–296, March. 433
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 434–443, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Fast and Accurate Shift-Reduce Constituent Parsing Muhua Zhu†, Yue Zhang‡, Wenliang Chen∗, Min Zhang∗and Jingbo Zhu† †Natural Language Processing Lab., Northeastern University, China ‡Singapore University of Technology and Design, Singapore ∗Soochow University, China and Institute for Infocomm Research, Singapore [email protected] yue [email protected] [email protected] [email protected] [email protected] Abstract Shift-reduce dependency parsers give comparable accuracies to their chartbased counterparts, yet the best shiftreduce constituent parsers still lag behind the state-of-the-art. One important reason is the existence of unary nodes in phrase structure trees, which leads to different numbers of shift-reduce actions between different outputs for the same input. This turns out to have a large empirical impact on the framework of global training and beam search. We propose a simple yet effective extension to the shift-reduce process, which eliminates size differences between action sequences in beam-search. Our parser gives comparable accuracies to the state-of-the-art chart parsers. With linear run-time complexity, our parser is over an order of magnitude faster than the fastest chart parser. 1 Introduction Transition-based parsers employ a set of shiftreduce actions and perform parsing using a sequence of state transitions. The pioneering models rely on a classifier to make local decisions, and search greedily for a transition sequence to build a parse tree. Greedy, classifier-based parsers have been developed for both dependency grammars (Yamada and Matsumoto, 2003; Nivre et al., 2006) and phrase-structure grammars (Sagae and Lavie, 2005). With linear run-time complexity, they were commonly regarded as a faster but less accurate alternative to graph-based chart parsers (Collins, 1997; Charniak, 2000; McDonald et al., 2005). Various methods have been proposed to address the disadvantages of greedy local parsing, among which a framework of beam-search and global discriminative training have been shown effective for dependency parsing (Zhang and Clark, 2008; Huang and Sagae, 2010). While beam-search reduces error propagation compared with greedy search, a discriminative model that is globally optimized for whole sequences of transition actions can avoid local score biases (Lafferty et al., 2001). This framework preserves the most important advantage of greedy local parsers, including linear run-time complexity and the freedom to define arbitrary features. With the use of rich non-local features, transition-based dependency parsers achieve state-of-the-art accuracies that are comparable to the best-graph-based parsers (Zhang and Nivre, 2011; Bohnet and Nivre, 2012). In addition, processing tens of sentences per second (Zhang and Nivre, 2011), these transition-based parsers can be a favorable choice for dependency parsing. The above global-learning and beam-search framework can be applied to transition-based phrase-structure (constituent) parsing also (Zhang and Clark, 2009), maintaining all the aforementioned benefits. However, the effects were not as significant as for transition-based dependency parsing. The best reported accuracies of transition-based constituent parsers still lag behind the state-of-the-art (Sagae and Lavie, 2006; Zhang and Clark, 2009). 
One difference between phrasestructure parsing and dependency parsing is that for the former, parse trees with different numbers of unary rules require different numbers of actions to build. Hence the scoring model needs to disambiguate between transitions sequences with different sizes. For the same sentence, the largest output can take twice as many as actions to build as the 434 smallest one. This turns out to have a significant empirical impact on parsing with beam-search. We propose an extension to the shift-reduce process to address this problem, which gives significant improvements to the parsing accuracies. Our method is conceptually simple, requiring only one additional transition action to eliminate size differences between different candidate outputs. On standard evaluations using both the Penn Treebank and the Penn Chinese Treebank, our parser gave higher accuracies than the Berkeley parser (Petrov and Klein, 2007), a state-of-the-art chart parser. In addition, our parser runs with over 89 sentences per second, which is 14 times faster than the Berkeley parser, and is the fastest that we are aware of for phrase-structure parsing. An open source release of our parser (version 0.6) is freely available on the Web. 1 In addition to the above contributions, we apply a variety of semi-supervised learning techniques to our transition-based parser. These techniques have been shown useful to improve chart-based parsing (Koo et al., 2008; Chen et al., 2012), but little work has been done for transition-based parsers. We therefore fill a gap in the literature by reporting empirical results using these methods. Experimental results show that semi-supervised methods give a further improvement of 0.9% in F-score on the English data and 2.4% on the Chinese data. Our Chinese results are the best that we are aware of on the standard CTB data. 2 Baseline parser We adopt the parser of Zhang and Clark (2009) for our baseline, which is based on the shift-reduce process of Sagae and Lavie (2005), and employs global perceptron training and beam search. 2.1 Vanilla Shift-Reduce Shift-reduce parsing is based on a left-to-right scan of the input sentence. At each step, a transition action is applied to consume an input word or construct a new phrase-structure. A stack is used to maintain partially constructed phrasestructures, while the input words are stored in a buffer. The set of transition actions are • SHIFT: pop the front word from the buffer, and push it onto the stack. 1http://sourceforge.net/projects/zpar/ Axioms [φ, 0, false,0] Goal [S, n, true, C] Inference Rules: [S, i, false, c] SHIFT [S|w, i + 1, false, c + cs] [S|s1s0, i, false, c] REDUCE-L/R-X [S|X, i, false, c + cr] [S|s0, i, false, c] UNARY-X [S|X, i, false, c + cu] [S, n, false, c] FINISH [S, n, true, c + cf] Figure 1: Deduction system of the baseline shiftreduce parsing process. • REDUCE-L/R-X: pop the top two constituents off the stack, combine them into a new constituent with label X, and push the new constituent onto the stack. • UNARY-X: pop the top constituent off the stack, raise it to a new constituent with label X, and push the new constituent onto the stack. • FINISH: pop the root node off the stack and ends parsing. 
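A minimal sketch of these transitions operating on a stack and buffer, with constituents held as simple (label, children, head-side) tuples; feature extraction, scoring, and head finding are omitted, so this illustrates the transition system rather than the parser's implementation.

```python
class ParserState(object):
    """Stack of partial constituents plus a buffer of input (word, POS) pairs."""

    def __init__(self, tagged_words):
        self.stack = []                   # partially built phrase-structure trees
        self.buffer = list(tagged_words)  # words not yet consumed
        self.root = None
        self.finished = False

    def shift(self):
        # SHIFT: pop the front word from the buffer and push it onto the stack.
        self.stack.append(self.buffer.pop(0))

    def reduce_binary(self, label, head_left=True):
        # REDUCE-L/R-X: pop the top two constituents, combine them under label X
        # (recording which child is the head), and push the new constituent.
        right = self.stack.pop()
        left = self.stack.pop()
        self.stack.append((label, [left, right], "L" if head_left else "R"))

    def unary(self, label):
        # UNARY-X: pop the top constituent, raise it to label X, and push it back.
        child = self.stack.pop()
        self.stack.append((label, [child], "U"))

    def finish(self):
        # FINISH: pop the root node off the stack and end parsing.
        assert not self.buffer and len(self.stack) == 1
        self.root = self.stack.pop()
        self.finished = True


if __name__ == "__main__":
    # Build a VP analysis of the two-word phrase "address issues".
    state = ParserState([("address", "VB"), ("issues", "NNS")])
    state.shift()
    state.shift()
    state.unary("NP")                          # NNS issues -> NP
    state.reduce_binary("VP", head_left=True)  # VB address + NP -> VP
    state.finish()
    print(state.root)
```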
The deduction system for the process is shown in Figure 1, where the item is formed as ⟨stack, buffer front index, completion mark, score⟩, and cs, cr, and cu represent the incremental score of the SHIFT, REDUCE, and UNARY parsing steps, respectively; these scores are calculated according to the context features of the parser state item. n is the number of words in the input. 2.2 Global Discriminative Training and Beam-Search For a given input sentence, the initial state has an empty stack and a buffer that contains all the input words. An agenda is used to keep the k best state items at each step. At initialization, the agenda contains only the initial state. At each step, every state item in the agenda is popped and expanded by applying a valid transition action, and the top k from the newly constructed state items are put back onto the agenda. The process repeats until the agenda is empty, and the best completed state item (recorded as candidate output) is taken for 435 Description Templates unigrams s0tc, s0wc, s1tc, s1wc, s2tc s2wc, s3tc, s3wc, q0wt, q1wt q2wt, q3wt, s0lwc, s0rwc s0uwc, s1lwc, s1rwc, s1uwc bigrams s0ws1w, s0ws1c, s0cs1w, s0cs1c, s0wq0w, s0wq0t, s0cq0w, s0cq0t, q0wq1w, q0wq1t, q0tq1w, q0tq1t, s1wq0w, s1wq0t, s1cq0w, s1cq0t trigrams s0cs1cs2c, s0ws1cs2c, s0cs1wq0t s0cs1cs2w, s0cs1cq0t, s0ws1cq0t s0cs1wq0t, s0cs1cq0w Table 1: A summary of baseline feature templates, where si represents the ith item on the stack S and qi denotes the ith item in the queue Q. w refers to the head lexicon, t refers to the head POS, and c refers to the constituent label. the output. The score of a state item is the total score of the transition actions that have been applied to build the item: C(α) = N X i=1 Φ(ai) · ⃗θ Here Φ(ai) represents the feature vector for the ith action ai in state item α. It is computed by applying the feature templates in Table 1 to the context of α. N is the total number of actions in α. The model parameter ⃗θ is trained with the averaged perceptron algorithm, applied to state items (sequence of actions) globally. We apply the early update strategy (Collins and Roark, 2004), stopping parsing for parameter updates when the goldstandard state item falls off the agenda. 2.3 Baseline Features Our baseline features are adopted from Zhang and Clark (2009), and are shown in Table 1 Here si represents the ith item on the top of the stack S and qi denotes the ith item in the front end of the queue Q. The symbol w denotes the lexical head of an item; the symbol c denotes the constituent label of an item; the symbol t is the POS of a lexical head. These features are adapted from Zhang and Clark (2009). We remove Chinese specific features and make the baseline parser languageindependent. 3 Improved hypotheses comparison Unlike dependency parsing, constituent parse trees for the same sentence can have different numbers of nodes, mainly due to the existence of unary nodes. As a result, completed state NP NN address NNS issues VP VB address NP NNS issues Figure 2: Example parse trees of the same sentence with different numbers of actions. items for the same sentence can have different numbers of unary actions. Take the phrase “address issues” for example, two possible parses are shown in Figure 2 (a) and (b), respectively. The first parse corresponds to the action sequence [SHIFT, SHIFT, REDUCE-R-NP, FINISH], while the second parse corresponds to the action sequence [SHIFT, SHIFT, UNARY-NP, REDUCE-LVP, FINISH], which consists of one more action than the first case. 
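A minimal sketch of this scoring scheme applied to the two action sequences above, with sparse features held in string-keyed counters; the feature templates here are toy stand-ins for those in Table 1, and the point is simply that the longer sequence contributes one more feature vector to the sum.

```python
from collections import Counter

def score(action_sequence, extract_features, theta):
    """C(alpha) = sum over actions of Phi(a_i) . theta, with Phi(a_i) returned as
    a sparse Counter mapping feature names to counts."""
    total = 0.0
    summed_features = Counter()
    for i, action in enumerate(action_sequence):
        phi = extract_features(action_sequence[:i], action)
        summed_features.update(phi)
        total += sum(theta.get(f, 0.0) * v for f, v in phi.items())
    return total, summed_features

def toy_features(history, action):
    # Illustrative features only: the action identity and the previous action.
    prev = history[-1] if history else "<s>"
    return Counter({f"act={action}": 1, f"prev={prev}+act={action}": 1})

if __name__ == "__main__":
    theta = {"act=SHIFT": 0.1, "act=UNARY-NP": -0.2, "act=FINISH": 0.05}
    seq_a = ["SHIFT", "SHIFT", "REDUCE-R-NP", "FINISH"]
    seq_b = ["SHIFT", "SHIFT", "UNARY-NP", "REDUCE-L-VP", "FINISH"]
    for seq in (seq_a, seq_b):
        total, feats = score(seq, toy_features, theta)
        print(len(seq), "actions,", sum(feats.values()),
              "feature counts, score", round(total, 2))
```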
In practice, variances between state items can be much larger than the chosen example. In the extreme case where a state item does not contain any unary action, the number of actions is 2n, where n is the number of words in the sentence. On the other hand, if the maximum number of consequent unary actions is 2 (Sagae and Lavie, 2005; Zhang and Clark, 2009), then the maximum number of actions a state item can have is 4n. The significant variance in the number of actions N can have an impact on the linear separability of state items, for which the feature vectors are PN i=1 Φ (ai). This turns out to have a significant empirical influence on perceptron training with early-update, where the training of the model interacts with search (Daume III, 2006). One way of improving the comparability of state items is to reduce the differences in their sizes, and we use a padding method to achieve this. The idea is to extend the set of actions by adding an IDLE action, so that completed state items can be further expanded using the IDLE action. The action does not change the state itself, but simply adds to the number of actions in the sequence. A feature vector is extracted for the IDLE action according to the final state context, in the same way as other actions. Using the IDLE action, the transition sequence for the two parses in Figure 2 can be [SHIFT, SHIFT, REDUCENP, FINISH, IDLE] and [SHIFT, SHIFT, UNARYNP, REDUCE-L-VP, FINISH], respectively. Their 436 Axioms [φ, 0, false, 0, 0] Goal [S, n, true, m : 2n ≤m ≤4n, C] Inference Rules: [S, i, false, k,c] SHIFT [S|w, i + 1, false, k + 1, c + cs] [S|s1s0, i, false, k, c] REDUCE-L/R-X [S|X, i, false, k + 1, c + cr] [S|s0, i, false, k, c] UNARY-X [S|X, i, false, k + 1, c + cu] [S, n, false, k, c] FINISH [S, n, true, k + 1, c + cf] [S, n, true, k, c] IDLE [S, n, true, k + 1, c + ci] Figure 3: Deductive system of the extended transition system. corresponding feature vectors have about the same sizes, and are more linearly separable. Note that there can be more than one action that are padded to a sequence of actions, and the number of IDLE actions depends on the size difference between the current action sequence and the largest action sequence without IDLE actions. Given this extension, the deduction system is shown in Figure 3. We add the number of actions k to an item. The initial item (Axioms) has k = 0, while the goal item has 2n ≤k ≤4n. Given this process, beam-search decoding can be made simpler than that of Zhang and Clark (2009). While they used a candidate output to record the best completed state item, and finish decoding when the agenda contains no more items, we can simply finish decoding when all items in the agenda are completed, and output the best state item in the agenda. With this new transition process, we experimented with several extended features,and found that the templates in Table 2 are useful to improve the accuracies further. Here sill denotes the left child of si’s left child. Other notations can be explained in a similar way. 4 Semi-supervised Parsing with Large Data This section discusses how to extract information from unlabeled data or auto-parsed data to further improve shift-reduce parsing accuracies. We consider three types of information, including s0llwc, s0lrwc, s0luwc s0rlwc, s0rrwc, s0ruwc s0ulwc, s0urwc, s0uuwc s1llwc, s1lrwc, s1luwc s1rlwc, s1rrwc, s1ruwc Table 2: New features for the extended parser. paradigmatic relations, dependency relations, and structural relations. 
These relations are captured by word clustering, lexical dependencies, and a dependency language model, respectively. Based on the information, we propose a set of novel features specifically designed for shift-reduce constituent parsing. 4.1 Paradigmatic Relations: Word Clustering Word clusters are regarded as lexical intermediaries for dependency parsing (Koo et al., 2008) and POS tagging (Sun and Uszkoreit, 2012). We employ the Brown clustering algorithm (Liang, 2005) on unannotated data (word segmentation is performed if necessary). In the initial state of clustering, each word in the input corpus is regarded as a cluster, then the algorithm repeatedly merges pairs of clusters that cause the least decrease in the likelihood of the input corpus. The clustering results are a binary tree with words appearing as leaves. Each cluster is represented as a bit-string from the root to the tree node that represents the cluster. We define a function CLU(w) to return the cluster ID (a bit string) of an input word w. 4.2 Dependency Relations: Lexical Dependencies Lexical dependencies represent linguistic relations between words: whether a word modifies another word. The idea of exploiting lexical dependency information from auto-parsed data has been explored before for dependency parsing (Chen et al., 2009) and constituent parsing (Zhu et al., 2012). To extract lexical dependencies, we first run the baseline parser on unlabeled data. To simplify the extraction process, we can convert auto-parsed constituency trees into dependency trees by using Penn2Malt. 2 From the dependency trees, we extract bigram lexical dependencies ⟨w1, w2, L/R⟩ where the symbol L (R) means that w1 (w2) is the head of w2 (w1). We also extract trigram lexical 2http://w3.msi.vxu.se/∼nivre/research/Penn2Malt.html 437 dependencies ⟨w1, w2, w3, L/R⟩, where L means that w1 is the head of w2 and w3, meanwhile w2 and w3 are required to be siblings. Following the strategy of Chen et al. (2009), we assign categories to bigram and trigram items separately according to their frequency counts. Specifically, top-10% most frequent items are assigned to the category of High Frequency (HF); otherwise if an item is among top 20%, we assign it to the category of Middle Frequency (MF); otherwise the category of Low Frequency (LF). Hereafter, we refer to the bigram and trigram lexical dependency lists as BLD and TLD, respectively. 4.3 Structural Relations: Dependency Language Model The dependency language model is proposed by Shen et al. (2008) and is used as additional information for graph-based dependency parsing in Chen et al. (2012). Formally, given a dependency tree y of an input sentence x, we can denote by H(y) the set of words that have at least one dependent. For each xh ∈H(y), we have a corresponding dependency structure Dh = (xLk, . . . xL1, xh, xR1, . . . , xRm). The probability P(Dh) is defined to be P(Dh) = PL(Dh) × PR(Dh) where PL(Dh) can be in turn defined as: PL(Dh) ≈ P(xL1|xh) ×P(xL2|xL1, xh) × . . . ×P(xLk|xLk−1, . . . , xLk−N+1, xh) PR(Dh) can be defined in a similar way. We build dependency language models on autoparsed data. Again, we convert constituency trees into dependency trees for the purpose of simplicity. From the dependency trees, we build a bigram and a trigram language model, which are denoted by BLM and TLM, respectively. The following are the templates of the records of the dependency language models. 
(1) ⟨xLi, xh, P(xLi|xh)⟩ (2) ⟨xRi, xh, P(xRi|xh)⟩ (3) ⟨xLi, xLi−1, xh, P(xLi|xLi−1, xh)⟩ (4) ⟨xRi, xRi−1, xh, P(xRi|xRi−1, xh)⟩ Here the templates (1) and (2) belong to BLM and the templates (3) and (4) belong to TLM. To Stat Train Dev Test Unlabeled EN # sent 39.8k 1.7k 2.4k 3,139.1k # word 950.0k 40.1k 56.7k 76,041.4k CH # sent 18.1k 350 348 11,810.7k # word 493.8k 8.0k 6.8k 269,057.2k Table 4: Statistics on sentence and word numbers of the experimental data. use the dependency language models, we employ a map function Φ(r) to assign a category to each record r according to its probability, as in Chen et al. (2012). The following is the map function. Φ(r) =      HP if P(r) ∈top−10% MP else if P(r) ∈top−30% LP otherwise 4.4 Semi-supervised Features We design a set of features based on the information extracted from auto-parsed data or unannotated data. The features are summarized in Table 3. Here CLU returns a cluster ID for a word. The functions BLDl/r(·), TLDl/r(·), BLMl/r(·), and TLMl/r(·) check whether a given word combination can be found in the corresponding lists. For example, BLDl(s1w, s0w) returns a category tag (HF, MF, or LF) if ⟨s1w, s0w, L⟩exits in the list BLD, else it returns NONE. 5 Experiments 5.1 Set-up Labeled English data employed in this paper were derived from the Wall Street Journal (WSJ) corpus of the Penn Treebank (Marcus et al., 1993). We used sections 2-21 as labeled training data, section 24 for system development, and section 23 for final performance evaluation. For labeled Chinese data, we used the version 5.1 of the Penn Chinese Treebank (CTB) (Xue et al., 2005). Articles 001270 and 440-1151 were used for training, articles 301-325 were used as development data, and articles 271-300 were used for evaluation. For both English and Chinese data, we used tenfold jackknifing (Collins, 2000) to automatically assign POS tags to the training data. We found that this simple technique could achieve an improvement of 0.4% on English and an improvement of 2.0% on Chinese. For English POS tagging, we adopted SVMTool, 3 and for Chinese POS tagging 3http://www.lsi.upc.edu/∼nlp/SVMTool/ 438 Word Cluster Features CLU(s1w) CLU(s0w) CLU(q0w) CLU(s1w)s1t CLU(s0w)s0t CLU(q0w)q0w Lexical Dependency Features BLDl(s1w, s0w) BLDl(s1w, s0w)◦s1t◦s0t BLDr(s1w, s0w) BLDr(s1w, s0w)◦s1t◦s0t BLDl(s1w, q0w)◦s1t◦q0t BLDl(s1w, q0w) BLDr(s1w, q0w) BLDr(s1w, q0w)◦s1t◦q0t BLDl(s0w, q0w) BLDl(s0w, q0w)◦s0t◦q0t BLDr(s0w, q0w)◦s0t◦q0t BLDr(s0w, q0w) TLDl(s1w, s1rdw, s0w) TLDl(s1w, s1rdw, s0w)◦s1t◦s0t TLDr(s1w, s0ldw, s0w) TLDr(s1w, s0ldw, s0w)◦s1t◦s0t TLDl(s0w, s0rdw, q0w)◦s0t◦q0t TLDl(s0w, s0rdw, q0w) TLDr(s0w, NONE, q0w) TLDr(s0w, NONE, q0w)◦s0t◦q0t Dependency Language Model Features BLMl(s1w, s0w) BLMl(s1w, s0w)◦s1t◦s0t BLMr(s1w, s0w) BLMr(s1w, s0w)◦s1t◦s0t BLMl(s0w, q0w) BLMl(s0w, q0w)◦s0t◦q0t BLMr(s0w, q0w)◦s0t◦q0t BLMr(s0w, q0w) TLMl(s1w, s1rdw, s0w) TLMl(s1w, s1rdw, s0w)◦s1t◦s0t TLMr(s1w, s0ldw, s0w) TLMr(s1w, s0ldw, s0w)◦s1t◦s0t Table 3: Semi-supervised features designed on the base of word clusters, lexical dependencies, and dependency language models. Here the symbol si denotes a stack item, qi denotes a queue item, w represents a word, and t represents a POS tag. Lan. System LR LP F1 ENG Baseline 88.4 88.7 88.6 +padding 88.8 89.5 89.1 +features 89.0 89.7 89.3 CHN Baseline 85.6 86.3 86.0 +padding 85.5 87.2 86.4 +features 85.5 87.6 86.5 Table 5: Experimental results on the English and Chinese development sets with the padding technique and new supervised features added incrementally. 
we employed the Stanford POS tagger. 4 We took the WSJ articles from the TIPSTER corpus (LDC93T3A) as unlabeled English data. In addition, we removed from the unlabeled English data the sentences that appear in the WSJ corpus of the Penn Treebank. For unlabeled Chinese data, we used Chinese Gigaword (LDC2003T09), on which we conducted Chinese word segmentation by using a CRF-based segmenter. Table 4 summarizes data statistics on sentence and word numbers of the data sets listed above. We used EVALB to evaluate parser performances, including labeled precision (LP), labeled recall (LR), and bracketing F1. 5 For significance tests, we employed the randomized permutationbased tool provided by Daniel Bikel. 6 In both training and decoding, we set the beam size to 16, which achieves a good tradeoff between efficiency and accuracy. The optimal iteration number of perceptron learning is determined 4http://nlp.stanford.edu/software/tagger.shtml 5http://nlp.cs.nyu.edu/evalb 6http://www.cis.upenn.edu/∼dbikel/software.html#comparator Lan. Features LR LP F1 ENG +word cluster 89.3 90.0 89.7 +lexical dependencies 89.7 90.3 90.0 +dependency LM 90.0 90.6 90.3 CHN +word cluster 85.7 87.5 86.6 +lexical dependencies 87.2 88.6 87.9 +dependency LM 87.2 88.7 88.0 Table 6: Experimental results on the English and Chinese development sets with different types of semi-supervised features added incrementally to the extended parser. on the development sets. For word clustering, we set the cluster number to 50 for both the English and Chinese experiments. 5.2 Results on Development Sets Table 5 reports the results of the extended parser (baseline + padding + supervised features) on the English and Chinese development sets. We integrated the padding method into the baseline parser, based on which we further incorporated the supervised features in Table 2. From the results we find that the padding method improves the parser accuracies by 0.5% and 0.4% on English and Chinese, respectively. Incorporating the supervised features in Table 2 gives further improvements of 0.2% on English and 0.1% on Chinese. Based on the extended parser, we experimented different types of semi-supervised features by adding the features incrementally. The results are shown in Table 6. By comparing the results in Table 5 and the results in Table 6 we can see that the semi-supervised features achieve an overall improvement of 1.0% on the English data and an im439 Type Parser LR LP F1 SI Ratnaparkhi (1997) 86.3 87.5 86.9 Collins (1999) 88.1 88.3 88.2 Charniak (2000) 89.5 89.9 89.5 Sagae & Lavie (2005)∗ 86.1 86.0 86.0 Sagae & Lavie (2006)∗ 87.8 88.1 87.9 Baseline 90.0 89.9 89.9 Petrov & Klein (2007) 90.1 90.2 90.1 Baseline+Padding 90.2 90.7 90.4 Carreras et al. (2008) 90.7 91.4 91.1 RE Charniak & Johnson (2005) 91.2 91.8 91.5 Huang (2008) 92.2 91.2 91.7 SE Zhu et al. (2012)∗ 90.4 90.5 90.4 Baseline+Padding+Semi 91.1 91.5 91.3 Huang & Harper (2009) 91.1 91.6 91.3 Huang et al. (2010)† 91.4 91.8 91.6 McClosky et al. (2006) 92.1 92.5 92.3 Table 7: Comparison of our parsers and related work on the English test set. ∗Shift-reduce parsers. † The results of self-training with a single latent annotation grammar. Type Parser LR LP F1 SI Charniak (2000)∗ 79.6 82.1 80.8 Bikel (2004)† 79.3 82.0 80.6 Baseline 82.1 83.1 82.6 Baseline+Padding 82.1 84.3 83.2 Petrov & Klein (2007) 81.9 84.8 83.3 RE Charniak & Johnson (2005)∗ 80.8 83.8 82.3 SE Zhu et al. 
(2012) 80.6 81.9 81.2 Baseline+Padding+Semi 84.4 86.8 85.6 Table 8: Comparison of our parsers and related work on the test set of CTB5.1.∗Huang (2009) adapted the parsers to Chinese parsing on CTB5.1. † We run the parser on CTB5.1 to get the results. provement of 1.5% on the Chinese data. 5.3 Final Results Here we report the final results on the English and Chinese test sets. We compared the final results with a large body of related work. We grouped the parsers into three categories: single parsers (SI), discriminative reranking parsers (RE), and semisupervised parsers (SE). Table 7 shows the comparative results on the English test set and Table 8 reports the comparison on the Chinese test set. From the results we can see that our extended parser (baseline + padding + supervised features) outperforms the Berkeley parser by 0.3% on English, and is comparable with the Berkeley parser on Chinese (−0.1% less). Here +padding means the padding technique and the features in Table 2. After integrating semi-supervised features, the parsing accuracy on English is improved to 91.3%. We note that the performance is on the same level Parser #Sent/Second Ratnaparkhi (1997) Unk Collins (1999) 3.5 Charniak (2000) 5.7 Sagae & Lavie (2005)∗ 3.7‡ Sagae & Lavie (2006)† 2.2‡ Petrov & Klein (2007) 6.2 Carreras et al. (2008) Unk This Paper Baseline 100.7 Baseline+Padding 89.5 Baseline+Padding+Semi 46.8 Table 9: Comparison of running times on the English test set, where the time for loading models is excluded. ∗The results of SVM-based shiftreduce parsing with greedy search. † The results of MaxEnt-based shift-reduce parser with best-first search. ‡ Times reported by authors running on different hardware. as the performance of self-trained parsers, except for McClosky et al. (2006), which is based on the combination of reranking and self-training. On Chinese, the final parsing accuracy is 85.6%. To our knowledge, this is by far the best reported performance on this data set. The padding technique, supervised features, and semi-supervised features achieve an overall improvement of 1.4% over the baseline on English, which is significant on the level of p < 10−5. The overall improvement on Chinese is 3.0%, which is also significant on the level of p < 10−5. 5.4 Comparison of Running Time We also compared the running times of our parsers with the related single parsers. We ran timing tests on an Intel 2.3GHz processor with 8GB memory. The comparison is shown in Table 9. From the table, we can see that incorporating semisupervised features decreases parsing speed, but the semi-supervised parser still has the advantage of efficiency over other parsers. Specifically, the semi-supervised parser is 7 times faster than the Berkeley parser. Note that Sagae & Lavie (2005) and Sagae & Lavie (2006) are also shift-reduce parsers, and their running times were evaluated on different hardwares. In practice, the running times of the shift-reduce parsers should be much shorter than the reported times in the table. 5.5 Error Analysis We conducted error analysis for the three systems: the baseline parser, the extended parser with 440 86 88 90 92 94 1 2 3 4 5 6 7 8 F Score Span Length Baseline Extended Semi-supervised Figure 5: Comparison of parsing accuracies of the baseline, extended parser, and semi-supervised parsers on spans of different lengths. the padding technique, and the semi-supervised parser, focusing on the English test set. 
The analysis was performed in four dimensions: parsing accuracies on different phrase types, on constituents of different span lengths, on different sentence lengths, and on sentences with different numbers of unknown words. 5.5.1 Different Phrase Types Table 10 shows the parsing accuracies of the baseline, extended parser, and semi-supervised parser on different phrase types. Here we only consider the nine most frequent phrase types in the English test set. In the table, the phrase types are ordered from left to right in the descending order of their frequencies. We also show the improvements of the semi-supervised parser over the baseline parser (the last row in the table). As the results show, the extended parser achieves improvements on most of the phrase types with two exceptions: Preposition Prase (PP) and Quantifier Phrase (QP). Semisupervised features further improve parsing accuracies over the extended parser (QP is an exception). From the last row, we can see that improvements of the semi-supervised parser over the baseline on VP, S, SBAR, ADVP, and ADJP are above the average improvement (1.4%). 5.5.2 Different Span Lengths Figure 5 shows a comparison of the three parsers on spans of different lengths. Here we consider span lengths up to 8. As the results show, both the padding extension and semi-supervised features are more helpful on relatively large spans: the performance gaps between the three parsers are enlarged with increasing span lengths. 82 84 86 88 90 92 94 10 20 30 40 50 60 70 F Score Sentence Length Baseline Extended Semi-supervised Figure 6: Comparison of parsing accuracies of the baseline, extended parser, and semi-supervised parser on sentences of different lengths. 5.5.3 Different Sentence Lengths Figure 6 shows a comparison of parsing accuracies of the three parsers on sentences of different lengths. Each number on the horizontal axis represents the sentences whose lengths are between the number and its previous number. For example, the number 30 refers to the sentences whose lengths are between 20 and 30. From the results we can see that semi-supervised features improve parsing accuracy on both short and long sentences. The points at 70 are exceptions. In fact, sentences with lengths between 60 and 70 have only 8 instances, and the statistics on such a small number of sentences are not reliable. 5.5.4 Different Numbers of Unknown Words Figure 4 shows a comparison of parsing accuracies of the baseline, extended parser, and semisupervised parser on sentences with different numbers of unknown words. As the results show, the padding method is not very helpful on sentences with large numbers of unknown words, while semi-supervised features help significantly on this aspect. This conforms to the intuition that semi-supervised methods reduce data sparseness and improve the performance on unknown words. 6 Conclusion In this paper, we addressed the problem of different action-sequence lengths for shift-reduce phrase-structure parsing, and designed a set of novel non-local features to further improve parsing. The resulting supervised parser outperforms the Berkeley parser, a state-of-the-art chart parser, in both accuracies and speeds. In addition, we incorporated a set of semi-supervised features. 
The 441 System NP VP S PP SBAR ADVP ADJP WHNP QP Baseline 91.9 90.1 89.8 88.1 85.7 84.6 72.1 94.8 89.3 Extended 92.1 90.7 90.2 87.9 86.6 84.5 73.6 95.5 88.6 Semi-supervised 93.2 92.0 91.5 89.3 88.2 86.8 75.1 95.7 89.1 Improvements +1.3 +1.9 +1.7 +1.2 +2.5 +2.2 +3.0 +0.9 -0.2 Table 10: Comparison of parsing accuracies of the baseline, extended parser, and semi-supervised parsers on different phrase types. 0 1 2 3 4 5 6 7 70 80 90 100 91.98 89.73 88.87 87.96 85.95 83.7 81.42 82.74 92.17 90.53 89.51 87.99 88.66 87.33 83.89 80.49 92.88 91.26 90.43 89.88 90.35 86.39 90.68 90.24 F-score (%) Baseline Extended Semi-supervised Figure 4: Comparison of parsing accuracies of the baseline, extended parser, and semi-supervised parser on sentences of different unknown words. final parser reaches an accuracy of 91.3% on English and 85.6% on Chinese, by far the best reported accuracies on the CTB data. Acknowledgements We thank the anonymous reviewers for their valuable comments. Yue Zhang and Muhua Zhu were supported partially by SRG-ISTD-2012-038 from Singapore University of Technology and Design. Muhua Zhu and Jingbo Zhu were funded in part by the National Science Foundation of China (61073140; 61272376), Specialized Research Fund for the Doctoral Program of Higher Education (20100042110031), and the Fundamental Research Funds for the Central Universities (N100204002). Wenliang Chen was funded partially by the National Science Foundation of China (61203314). References Daniel M. Bikel. 2004. On the parameter space of generative lexicalized statistical parsing models. Ph.D. thesis, University of Pennsylvania. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of EMNLP, pages 12–14, Jeju Island, Korea. Xavier Carreras, Michael Collins, and Terry Koo. 2008. Tag, dynamic programming, and the perceptron for efficient, feature-rich parsing. In Proceedings of CoNLL, pages 9–16, Manchester, England. Eugune Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of ACL, pages 173–180. Eugune Charniak. 2000. A maximum-entropyinspired parser. In Proceedings of NAACL, pages 132–139, Seattle, Washington, USA. Wenliang Chen, Junichi Kazama, Kiyotaka Uchimoto, and Kentaro Torisawa. 2009. Improving dependency parsing with subtrees from auto-parsed data. In Proceedings of EMNLP, pages 570–579, Singapore. Wenliang Chen, Min Zhang, and Haizhou Li. 2012. Utilizing dependency language models for graphbased dependency. In Proceedings of ACL, pages 213–222, Jeju, Republic of Korea. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of ACL, Stroudsburg, PA, USA. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of ACL, Madrid, Spain. Michael Collins. 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, University of Pennsylvania. Michael Collins. 2000. Discriminative reranking for natural language processing. In Proceedings of ICML, pages 175–182, Stanford, CA, USA. Hal Daume III. 2006. Practical Structured Learning for Natural Language Processing. Ph.D. thesis, USC. Zhongqiang Huang and Mary Harper. 2009. Selftraining PCFG grammars with latent annotations 442 across languages. In Proceedings of EMNLP, pages 832–841, Singapore. Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. 
In Proceedings of ACL, pages 1077–1086, Uppsala, Sweden. Zhongqiang Huang, Mary Harper, and Slav Petrov. 2010. Self-training with products of latent variable grammars. In Proceedings of EMNLP, pages 12–22, Massachusetts, USA. Liang Huang. 2008. Forest reranking: discriminative parsing with non-local features. In Proceedings of ACL, pages 586–594, Ohio, USA. Liang-Ya Huang. 2009. Improve Chinese parsing with Max-Ent reranking parser. In Master Project Report, Brown University. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282–289, Massachusetts, USA, June. Percy Liang. 2005. Semi-supervised learning for natural language. Master’s thesis, Massachusetts Institute of Technology. Mitchell P. Marcus, Beatrice Santorini, and Mary A. Marcinkiewiz. 1993. Building a large annotated corpus of English. Computational Linguistics, 19(2):313–330. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the HLT/NAACL, Main Conference, pages 152–159, New York City, USA, June. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91– 98, Ann Arbor, Michigan, June. Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: a data-driven parser-generator for dependency parsing. In Proceedings of LREC, pages 2216–2219. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLT/NAACL, pages 404–411, Rochester, New York, April. Adwait Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In Proceedings of EMNLP, Rhode Island, USA. Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of IWPT, pages 125–132, Vancouver, Canada. Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of HLT/NAACL, Companion Volume: Short Papers, pages 129–132, New York, USA. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL, pages 577–585, Ohio, USA. Weiwei Sun and Hans Uszkoreit. 2012. Capturing paradigmatic and syntagmatic lexical relations: towards accurate Chinese part-of-speech tagging. In Proceedings of ACL, Jeju, Republic of Korea. Nianwen Xue, Fei Xia, Fu dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, pages 195–206, Nancy, France. Yue Zhang and Stephen Clark. 2008. Joint word segmentation and POS tagging using a single perceptron. In Proceedings of ACL/HLT, pages 888–896, Columbus, Ohio. Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese Treebank using a global discriminative model. In Proceedings of IWPT, Paris, France, October. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of ACL, pages 188–193, Portland, Oregon, USA. Muhua Zhu, Jingbo Zhu, and Huizhen Wang. 2012. 
Exploiting lexical dependencies from large-scale data for better shift-reduce constituency parsing. In Proceedings of COLING, pages 3171–3186, Mumbai, India. 443
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 444–454, Sofia, Bulgaria, August 4-9 2013. ©2013 Association for Computational Linguistics Nonconvex Global Optimization for Latent-Variable Models∗ Matthew R. Gormley Jason Eisner Department of Computer Science Johns Hopkins University, Baltimore, MD {mrg,jason}@cs.jhu.edu ∗This research was partially funded by the JHU Human Language Technology Center of Excellence. Abstract Many models in NLP involve latent variables, such as unknown parses, tags, or alignments. Finding the optimal model parameters is then usually a difficult nonconvex optimization problem. The usual practice is to settle for local optimization methods such as EM or gradient ascent. We explore how one might instead search for a global optimum in parameter space, using branch-and-bound. Our method would eventually find the global maximum (up to a user-specified ϵ) if run for long enough, but at any point can return a suboptimal solution together with an upper bound on the global maximum. As an illustrative case, we study a generative model for dependency parsing. We search for the maximum-likelihood model parameters and corpus parse, subject to posterior constraints. We show how to formulate this as a mixed integer quadratic programming problem with nonlinear constraints. We use the Reformulation Linearization Technique to produce convex relaxations during branch-and-bound. Although these techniques do not yet provide a practical solution to our instance of this NP-hard problem, they sometimes find better solutions than Viterbi EM with random restarts, in the same time. 1 Introduction Rich models with latent linguistic variables are popular in computational linguistics, but in general it is not known how to find their optimal parameters. In this paper, we present some “new” attacks for this common optimization setting, drawn from the mathematical programming toolbox. We focus on the well-studied but unsolved task of unsupervised dependency parsing (i.e., dependency grammar induction). This may be a particularly hard case, but its structure is typical. Many parameter estimation techniques have been attempted, including expectation-maximization (EM) (Klein and Manning, 2004; Spitkovsky et al., 2010a), contrastive estimation (Smith and Eisner, 2006; Smith, 2006), Viterbi EM (Spitkovsky et al., 2010b), and variational EM (Naseem et al., 2010; Cohen et al., 2009; Cohen and Smith, 2009). These are all local search techniques, which improve the parameters by hill-climbing. The problem with local search is that it gets stuck in local optima. This is evident for grammar induction. An algorithm such as EM will find numerous different solutions when randomly initialized to different points (Charniak, 1993; Smith, 2006). [Figure 1 shows a branch-and-bound tree in which each node carries a local upper bound and each branch constrains a single model parameter θm, together with an incumbent solution θ = [−0.1, −0.6, −20, −1.3, . . .], f = [4, 0, 2, 21, . . .] whose objective is Σm θm fm = −400.] Figure 1: Each node contains a local upper bound for its subspace, computed by a relaxation. The node branches on a single model parameter θm to partition its subspace. The lower bound, -400, is given by the best solution seen so far, the incumbent. The upper bound, -298, is the min of all remaining leaf nodes. The node with a local bound of -467.5 can be pruned because no solution within its subspace could be better than the incumbent.
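To make the bookkeeping in Figure 1 concrete, here is a minimal, generic branch-and-bound sketch for maximization over a box of parameters; the upper_bound and feasible_value callbacks are toy stand-ins for the paper's RLT-based LP relaxation and its projection step, and the demo objective is an arbitrary concave function, so this illustrates only the search strategy, not the paper's system.

```python
import heapq

def branch_and_bound(box, upper_bound, feasible_value, epsilon=1e-2):
    """Maximize over a box given as a list of (lo, hi) intervals, one per parameter.

    upper_bound(box)    -> an upper bound on the objective over the box (a relaxation).
    feasible_value(box) -> (value, point) for some feasible point inside the box,
                           playing the role of a projected relaxed solution.
    Stops when no unexplored box can beat the incumbent by more than epsilon.
    """
    incumbent_value, incumbent = feasible_value(box)
    frontier = [(-upper_bound(box), box)]           # max-heap via negated bounds
    while frontier:
        neg_bound, node = heapq.heappop(frontier)
        if -neg_bound <= incumbent_value + epsilon:
            break                                   # certificate of epsilon-optimality
        # Branch: split the widest parameter interval in half.
        m = max(range(len(node)), key=lambda i: node[i][1] - node[i][0])
        lo, hi = node[m]
        mid = (lo + hi) / 2.0
        for child in (node[:m] + [(lo, mid)] + node[m + 1:],
                      node[:m] + [(mid, hi)] + node[m + 1:]):
            value, point = feasible_value(child)
            if value > incumbent_value:             # new incumbent (lower bound)
                incumbent_value, incumbent = value, point
            bound = upper_bound(child)
            if bound > incumbent_value + epsilon:   # otherwise prune the child
                heapq.heappush(frontier, (-bound, child))
    return incumbent_value, incumbent


if __name__ == "__main__":
    # Toy problem: maximize the concave function f(x) = -(x1^2 + x2^2) over a box.
    def f(x):
        return -sum(v * v for v in x)

    def upper_bound(box):
        # Exact bound for this toy objective: evaluate f at the point of the box
        # closest to the unconstrained maximizer (the origin).
        closest = [0.0 if lo <= 0.0 <= hi else (lo if abs(lo) < abs(hi) else hi)
                   for lo, hi in box]
        return f(closest)

    def feasible_value(box):
        center = [(lo + hi) / 2.0 for lo, hi in box]
        return f(center), center

    print(branch_and_bound([(-5.0, 0.6), (-2.0, 0.6)], upper_bound, feasible_value))
```

Because the frontier is ordered by local upper bound, the first popped node whose bound falls within epsilon of the incumbent certifies near-optimality, mirroring the role of the -298 global upper bound in the figure.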
A variety of ways to find better local optima have been explored, including heuristic initialization of the model parameters (Spitkovsky et al., 2010a), random restarts (Smith, 2006), and annealing (Smith and Eisner, 2006; Smith, 2006). Others have achieved accuracy improvements by enforcing linguistically motivated posterior constraints on the parameters (Gillenwater et al., 2010; Naseem et al., 2010), such as requiring most sentences to have verbs or encouraging nouns to be children of verbs or prepositions. We introduce a method that performs global 444 search with certificates of ϵ-optimality for both the corpus parse and the model parameters. Our search objective is log-likelihood. We can also impose posterior constraints on the latent structure. As we show, maximizing the joint loglikelihood of the parses and the parameters can be formulated as a mathematical program (MP) with a nonconvex quadratic objective and with integer linear and nonlinear constraints. Note that this objective is that of hard (Viterbi) EM—we do not marginalize over the parses as in classical EM.1 To globally optimize the objective function, we employ a branch-and-bound algorithm that searches the continuous space of the model parameters by branching on individual parameters (see Figure 1). Thus, our branch-and-bound tree serves to recursively subdivide the global parameter hypercube. Each node represents a search problem over one of the resulting boxes (i.e., orthotopes). The crucial step is to prune nodes high in the tree by determining that their boxes cannot contain the global maximum. We compute an upper bound at each node by solving a relaxed maximization problem tailored to its box. If this upper bound is worse than our current best solution, we can prune the node. If not, we split the box again via another branching decision and retry on the two halves. At each node, our relaxation derives a linear programming problem (LP) that can be efficiently solved by the dual simplex method. First, we linearly relax the constraints that grammar rule probabilities sum to 1—these constraints are nonlinear in our parameters, which are log-probabilities. Second, we linearize the quadratic objective by applying the Reformulation Linearization Technique (RLT) (Sherali and Adams, 1990), a method of forming tight linear relaxations of various types of MPs: the reformulation step multiplies together pairs of the original linear constraints to generate new quadratic constraints, and then the linearization step replaces quadratic terms in the new constraints with auxiliary variables. Finally, if the node is not pruned, we search for a better incumbent solution under that node by projecting the solution of the RLT relaxation back onto the feasible region. In the relaxation, the model parameters might sum to slightly more than 1This objective might not be a great sacrifice: Spitkovsky et al. (2010b) present evidence that hard EM can outperform soft EM for grammar induction in a hill-climbing setting. We use it because it is a quadratic objective. However, maximizing it remains NP-hard (Cohen and Smith, 2010). one and the parses can consist of fractional dependency edges. We project in order to compute the true objective and compare with other solutions. Our results demonstrate that our method can obtain higher likelihoods than Viterbi EM with random restarts. Furthermore, we show how posterior constraints inspired by Gillenwater et al. (2010) and Naseem et al. 
(2010) can easily be applied in our framework to obtain competitive accuracies using a simple model, the Dependency Model with Valence (Klein and Manning, 2004). We also obtain an ϵ-optimal solution on a toy dataset. We caution that the linear relaxations are very loose on larger boxes. Since we have many dimensions, the binary branch-and-bound tree may have to grow quite deep before the boxes become small enough to prune. This is why nonconvex quadratic optimization by LP-based branch-and-bound usually fails with more than 80 variables (Burer and Vandenbussche, 2009). Even our smallest (toy) problems have hundreds of variables, so our experimental results mainly just illuminate the method’s behavior. Nonetheless, we offer the method as a new tool which, just as for local search, might be combined with other forms of problem-specific guidance to produce more practical results. 2 The Constrained Optimization Task We begin by describing how for our typical model, the Viterbi EM objective can be formulated as a mixed integer quadratic programming (MIQP) problem with nonlinear constraints (Figure 2). Other locally normalized log-linear generative models (Berg-Kirkpatrick et al., 2010) would have a similar formulation. In such models, the loglikelihood objective is simply a linear function of the feature counts. However, the objective becomes quadratic in unsupervised learning, because the feature counts are themselves unknown variables to be optimized. The feature counts are constrained to be derived from the latent variables (e.g., parses), which are unknown discrete structures that must be encoded with integer variables. The nonlinear constraints ensure that the model parameters are true log-probabilities. Concretely, (1) specifies the Viterbi EM objective: the total log-probability of the best parse trees under the parameters θ, given by a sum of log-probabilities θm of the individual steps needed to generate the tree, as encoded by the features fm. The (nonlinear) sum-to-one constraints on the 445 Variables: θm Log-probability for feature m fm Corpus-wide feature count for m esij Indicator of an arc from i to j in tree s Indices and constants: m Feature / model parameter index s Sentence index c Conditional distribution index M Number of model parameters C Number of conditional distributions Mc cth Set of feature indices that sum to 1.0 S Number of sentences Ns Number of words in the sth sentence Objective and constraints: max X m θmfm (1) s.t. X m∈Mc exp(θm) = 1, ∀c (2) A f e  ≤b (Model constraints) (3) θm ≤0, fm, esij ∈Z, ∀m, s, i, j (4) Figure 2: Viterbi EM as a mathematical program probabilities are in (2). The linear constraints in (3) will ensure that the arc variables for each sentence es encode a valid latent dependency tree, and that the f variables count up the features of these trees. The final constraints (4) simply specify the range of possible values for the model parameters and their integer count variables. Our experiments use the dependency model with valence (DMV) (Klein and Manning, 2004). This generative model defines a joint distribution over the sentences and their dependency trees. We encode the DMV using integer linear constraints on the arc variables e and feature counts f. These will constitute the model constraints in (3). The constraints must declaratively specify that the arcs form a valid dependency tree and that the resulting feature values are as defined by the DMV. 
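As a concrete reading of Figure 2, the fragment below evaluates the objective (1) and checks the nonlinear sum-to-one constraints (2) for a fixed candidate assignment. The grouping of features into conditional distributions and all numeric values are invented for illustration; in the actual program the counts f are themselves variables, tied to the latent parses through the model constraints (3)–(4).

```python
import math

# Hypothetical toy instance: two conditional distributions over feature indices (the sets M_c).
dists = {"root": [0, 1], "child.L,Noun": [2, 3]}
theta = {0: math.log(0.7), 1: math.log(0.3),   # log-probabilities theta_m
         2: math.log(0.9), 3: math.log(0.1)}
f     = {0: 4, 1: 1, 2: 7, 3: 2}               # corpus-wide feature counts f_m (fixed here)

# Objective (1): total log-probability of the best corpus parse under theta.
objective = sum(theta[m] * f[m] for m in theta)

# Constraints (2): each conditional distribution must sum to one in probability space.
feasible = all(abs(sum(math.exp(theta[m]) for m in M_c) - 1.0) < 1e-9
               for M_c in dists.values())

print(round(objective, 3), feasible)
```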
Tree Constraints To ensure that our arc variables, es, form a dependency tree, we employ the same single-commodity flow constraints of Magnanti and Wolsey (1994) as adapted by Martins et al. (2009) for parsing. We also use the projectivity constraints of Martins et al. (2009). The single-commodity flow constraints simultaneously enforce that each node has exactly one parent, the special root node (position 0) has no incoming arcs, and the arcs form a connected graph. For each sentence, s, the variable φsij indicates the amount of flow traversing the arc from i to j in sentence s. The constraints below specify that the root node emits Ns units of flow (5), that one unit of flow is consumed by each each node (6), that the flow is zero on each disabled arc (7), and that the arcs are binary variables (8). Single-commodity flow (Magnanti & Wolsey, 1994) Ns X j=1 φs0j = Ns, ∀j (5) Ns X i=0 φsij − Ns X k=1 φsjk = 1, ∀j (6) φsij ≤Nsesij, ∀i, j (7) esij ∈{0, 1}, ∀i, j (8) Projectivity is enforced by adding a constraint (9) for each arc ensuring that no edges will cross that arc if it is enabled. Xij is the set of arcs (k, l) that cross the arc (i, j). Projectivity (Martins et al., 2009) X (k,l)∈Xij eskl ≤Ns(1 −esij), ∀s, i, j (9) DMV Feature Counts The DMV generates a dependency tree recursively as follows. First the head word of the sentence is generated, t ∼ Discrete(θroot), where θroot is a subvector of θ. To generate its children on the left side, we flip a coin to decide whether an adjacent child is generated, d ∼Bernoulli(θdec.L.0,t). If the coin flip d comes up continue, we sample the word of that child as t′ ∼Discrete(θchild.L,t). We continue generating non-adjacent children in this way, using coin weights θdec.L.≥1,t until the coin comes up stop. We repeat this procedure to generate children on the right side, using the model parameters θdec.R.0,t, θchild.R,t, and θdec.R.≥1,t. For each new child, we apply this process recursively to generate its descendants. The feature count variables for the DMV are encoded in our MP as various sums over the edge variables. We begin with the root/child feature counts. The constraint (10) defines the feature count for model parameter θroot,t as the number of all enabled arcs connecting the root node to a word of type t, summing over all sentences s. The constraint in (11) similarly defines fchild.L,t,t′ to be the number of enabled arcs connecting a parent of 446 type t to a left child of type t′. Wst is the index set of tokens in sentences s with word type t. DMV root/child feature counts froot,t = Ns X s=1 X j∈Wst es0j, ∀t (10) fchild.L,t,t′ = Ns X s=1 X j<i δ h i∈Wst ∧ j∈Wst′ i esij, ∀t, t′ (11) The decision feature counts require the addition of an auxiliary count variables f(si) m ∈Z indicating how many times decision feature m fired at some position in the corpus s, i. We then need only add a constraint that the corpus wide feature count is the sum of these token-level feature counts fm = PS s=1 PNs i=1 f(si) m , ∀m. Below we define these auxiliary variables for 1 ≤s ≤S and 1 ≤i ≤Ns. The helper variable ns,i,l counts the number of enabled arcs to the left of token i in sentence s. Let t denote the word type of token i in sentence s. Constraints (11) (16) are defined analogously for the right side feature counts. 
DMV decision feature counts ns,i,l = i−1 X j=1 esij (12) ns,i,l/Ns ≤f(s,i) dec.L.0,t,cont ≤1 (13) f(s,i) dec.L.0,t,stop = 1 −f(s,i) dec.L.0,t,cont (14) f(s,i) dec.L.≥1,t,stop = f(s,i) dec.L.0,t,cont (15) f(s,i) dec.L.≥1,t,cont = ns,i,l −f(s,i) dec.L.0,t,cont (16) 3 A Branch-and-Bound Algorithm The mixed integer quadratic program with nonlinear constraints, given in the previous section, maximizes the nonconvex Viterbi EM objective and is NP-hard to solve (Cohen and Smith, 2010). The standard approach to optimizing this program is local search by the hard (Viterbi) EM algorithm. Yet local search can only provide a lower (pessimistic) bound on the global maximum. We propose a branch-and-bound algorithm, which will iteratively tighten both pessimistic and optimistic bounds on the optimal solution. This algorithm may be halted at any time, to obtain the best current solution and a bound on how much better the global optimum could be. A feasible solution is an assignment to all the variables—both model parameters and corpus parse—that satisfies all constraints. Our branchand-bound algorithm maintains an incumbent solution: the best known feasible solution according to the objective function. This is updated as better feasible solutions are found. Our algorithm implicitly defines a search tree in which each node corresponds to a region of model parameter space. Our search procedure begins with only the root node, which represents the full model parameter space. At each node we perform three steps: bounding, projecting, and branching. In the bounding step, we solve a relaxation of the original problem to provide an upper bound on the objective achievable within that node’s subregion. A node is pruned when Lglobal + ϵ|Lglobal| ≥ Ulocal, where Lglobal is the incumbent score, Ulocal is the upper bound for the node, and ϵ > 0. This ensures that its entire subregion will not yield a ϵ-better solution than the current incumbent. The overall optimistic bound is given by the worst optimistic bound of all current leaf nodes. The projecting step, if the node is not pruned, projects the solution of the relaxation back to the feasible region, replacing the current incumbent if this projection provides a better lower bound. In the branching step, we choose a variable θm on which to divide. Each of the child nodes receives a lower θmin m and upper θmax m bound for θm. The child subspaces partition the parent subspace. The search tree is defined by a variable ordering and the splitting procedure. We do binary branching on the variable θm with the highest regret, defined as zm −θmfm, where zm is the auxiliary objective variable we will introduce in § 4.2. Since θm is a log-probability, we split its current range at the midpoint in probability space, log((exp θmin m + exp θmax m )/2). We perform best-first search, ordering the nodes by the the optimistic bound of their parent. We also use the LP-guided rule (Martin, 2000; Achterberg, 2007, section 6.1) to perform depth-first plunges in search of better incumbents. 4 Relaxations The relaxation in the bounding step computes an optimistic bound for a subspace of the model parameters. This upper bound would ideally be not much greater than the true maximum achievable on that region, but looser upper bounds are generally faster to compute. 447 We present successive relaxations to the original nonconvex mixed integer quadratic program with nonlinear constraints from (1)–(4). 
First, we show how the nonlinear sum-to-one constraints can be relaxed into linear constraints and tightened. Second, we apply a classic approach to bound the nonconvex quadratic objective by a linear concave envelope. Finally, we present our full relaxation based on the Reformulation Linearization Technique (RLT) (Sherali and Adams, 1990). We solve these LPs by the dual simplex algorithm. 4.1 Relaxing the sum-to-one constraint In this section, we use cutting planes to create a linear relaxation for the sum-to-one constraint (2). When relaxing a constraint, we must ensure that any assignment of the variables that was feasible (i.e. respected the constraints) in the original problem must also be feasible in the relaxation. In most cases, the relaxation is not perfectly tight and so will have an enlarged space of feasible solutions. We begin by weakening constraint (2) to X m∈Mc exp(θm) ≤1 (17) The optimal solution under (17) still satisfies the original equality constraint (2) because of the maximization. We now relax (17) by approximating the surface z = P m∈Mc exp(θm) by the max of N lower-bounding linear functions on R|Mc|. Instead of requiring z ≤1, we only require each of these lower bounds to be ≤1, slightly enlarging the feasible space into a convex polytope. Figure 3a shows the feasible region constructed from N=3 linear functions on two logprobabilities θ1, θ2. Formally, for each c, we define the ith linear lower bound (i = 1, . . . , N) to be the tangent hyperplane at some point ˆθ (i) c = [ˆθ(i) c,1, . . . , ˆθ(i) c,|Mc|] ∈ R|Mc|, where each coordinate is a log-probability ˆθ(i) c,m < 0. We require each of these linear functions to be ≤1: Sum-to-one Relaxation X m∈Mc  θm + 1 −ˆθ(i) c,m  exp  ˆθ(i) c,m  ≤1, ∀i, ∀c (18) 4.2 “Relaxing” the objective Our true maximization objective P m θmfm in (1) is a sum of quadratic terms. If the parameters θ (a) 0 1 2 3 4 fsm ￿15 ￿10 ￿5 Θm ￿60 ￿40 ￿20 0 20 (b) Figure 3: In (a), the area under the curve corresponds to those points (θ1, θ2) that satisfy (17) (z ≤1), with equality (2) achieved along the curve (z = 1). The shaded area shows the enlarged feasible region under the linear relaxation. In (b), the curved lower surface represents a single product term in the objective. The piecewise-linear upper surface is its concave envelope (raised by 20 for illustration; in reality they touch). were fixed, the objective would become linear in the latent features. Although the parameters are not fixed, the branch-and-bound algorithm does box them into a small region, where the quadratic objective is “more linear.” Since it is easy to maximize a concave function, we will maximize the concave envelope—the concave function that most tightly upper-bounds our objective over the region. This turns out to be piecewise linear and can be maximized with an LP solver. Smaller regions yield tighter bounds. Each node of the branch-and-bound tree specifies a region via bounds constraints θmin m < θm < θmax m , ∀m. In addition, we have known bounds fmin m ≤fm ≤fmax m , ∀m for the count variables. McCormick (1976) described the concave envelope for a single quadratic term subject to bounds constraints (Figure 3b). 
In our case: θmfm ≤min[fmax m θm + θmin m fm −θmin m fmax m , fmin m θm + θmax m fm −θmax m fmin m ] We replace our objective P m θmfm with P m zm, where we would like to constrain each auxiliary variable zm to be = θmfm or (equivalently) ≤ θmfm, but instead settle for making it ≤the concave envelope—a linear programming problem: Concave Envelope Objective max X m zm (19) s.t. zm ≤fmax m θm + θmin m fm −θmin m fmax m (20) zm ≤fmin m θm + θmax m fm −θmax m fmin m (21) 448 4.3 Reformulation Linearization Technique The Reformulation Linearization Technique (RLT)2 (Sherali and Adams, 1990) is a method of forming tighter relaxations of various types of MPs. The basic method reformulates the problem by adding products of existing constraints. The quadratic terms in the objective and in these new constraints are redefined as auxiliary variables, thereby linearizing the program. In this section, we will show how the RLT can be applied to our grammar induction problem and contrast it with the concave envelope relaxation presented in section 4.2. Consider the original MP in equations (1) (4), with the nonlinear sum-to-one constraints in (2) replaced by our linear constraints proposed in (18). If we remove the integer constraints in (4), the result is a quadratic program with purely linear constraints. Such problems have the form max xT Qx (22) s.t. Ax ≤b (23) −∞< Li ≤xi ≤Ui < ∞, ∀i (24) where the variables are x ∈Rn, A is an m × n matrix, and b ∈Rm, and Q is an n × n indefinite3 matrix. Without loss of generality we assume Q is symmetric. The application of the RLT here was first considered by Sherali and Tuncbilek (1995). For convenience of presentation, we represent both the linear inequality constraints and the bounds constraints, under a different parameterization using the matrix G and vector g. " (bi −Aix) ≥0, 1 ≤i ≤m (Uk −xk) ≥0, 1 ≤k ≤n (−Lk + xk) ≥0, 1 ≤k ≤n # ≡ h(gi −Gix) ≥0, 1 ≤i ≤m + 2n i The reformulation step forms all possible products of these linear constraints and then adds them to the original quadratic program. (gi −Gix)(gj −Gjx) ≥0, ∀1 ≤i ≤j ≤m + 2n In the linearization step, we replace all quadratic terms in the quadratic objective and new quadratic constraints with auxiliary variables: wij ≡xixj, ∀1 ≤i ≤j ≤n 2The key idea underlying the RLT was originally introduced in Adams and Sherali (1986) for 0-1 quadratic programming. It has since been extended to various other settings; see Sherali and Liberti (2008) for a complete survey. 3In the general case, that Q is indefinite causes this program to be nonconvex, making this problem NP-hard to solve (Vavasis, 1991; Pardalos, 1991). This yields the following RLT relaxation: RLT Relaxation max X 1≤i≤j≤n Qijwij (25) s.t. gigj − n X k=1 gjGikxk − n X k=1 giGjkxk + n X k=1 n X l=1 GikGjlwkl ≥0, ∀1 ≤i ≤j ≤m + 2n (26) Notice above that we have omitted the original inequality constraints (23) and bounds (24), because they are fully enforced by the new RLT constraints (26) from the reformulation step (Sherali and Tuncbilek, 1995). In our experiments, we keep the original constraints and instead explore subsets of the RLT constraints. If the original QP contains equality constraints of the form Gex = ge, then we can form constraints by multiplying this one by each variable xi. This gives us the following new set of constraints, for each equality constraint e: gexi + Pn j=1 −Gejwij = 0, ∀1 ≤i ≤n. Theoretical Properties The new constraints in eq. (26) will impose the concave envelope constraints (20)–(21) (Anstreicher, 2009). 
The constraints presented above are considered to be first-level constraints corresponding to the first-level variables wij. However, the same technique can be applied repeatedly to produce polynomial constraints of higher degree. These higher level constraints/variables have been shown to provide increasingly tighter relaxations (Sherali and Adams, 1990) at the cost of a large number of variables and constraints. In the case where x ∈{0, 1}n the degree-n RLT constraints will restrict to the convex hull of the feasible solutions (Sherali and Adams, 1990). This is in direct contrast to the concave envelope relaxation presented in section 4.2 which relaxes to the convex hull of each quadratic term independently. This demonstrates the key intuition of the RLT relaxation: The products of constraints are implied (and unnecessary) in the original variable space. Yet when we project to a higherdimentional space by including the auxiliary variables, the linearized constraints cut off portions of the feasible region given by only the concave envelope relaxation in eqs. (20)-(21) . 449 4.4 Adding Posterior Constraints It is a simple extension to impose posterior constraints within our framework. Here we emphasize constraints that are analogous to the universal linguistic constraints from Naseem et al. (2010). Since we optimize the Viterbi EM objective, we directly constrain the counts in the single corpus parse rather than expected counts from a distribution over parses. Let E be the index set of model parameters corresponding to edge types from Table 1 of Naseem et al. (2010), and Ns be the number of words in the sth sentence. We impose the constraint that 75% of edges come from E: P m∈E fm ≥0.75 PS s=1 Ns  . 5 Projections A pessimistic bound, from the projecting step, will correspond to a feasible but not necessarily optimal solution to the original problem. We propose several methods for obtaining pessimistic bounds during the branch-and-bound search, by projecting and improving the solutions found by the relaxation. A solution to the relaxation may be infeasible in the original problem for two reasons: the model parameters might not sum to one, and/or the parse may contain fractional edges. Model Parameters For each set of model parameters Mc that should sum-to-one, we project the model parameters onto the Mc −1 simplex by one of two methods: (1) normalize the infeasible parameters or (2) find the point on the simplex that has minimum Euclidean distance to the infeasible parameters using the algorithm of Chen and Ye (2011). For both methods, we can optionally apply add-λ smoothing before projecting. Parses Since we are interested in projecting the fractional parse onto the space of projective spanning trees, we can simply employ a dynamic programming parsing algorithm (Eisner and Satta, 1999) where the weight of each edge is given as the fraction of the edge variable. Only one of these projection techniques is needed. We then either use parsing to fill in the optimal parse trees given the projected model parameters, or use supervised parameter estimation to fill in the optimal model parameters given the projected parses. These correspond to the Viterbi E step and M step, respectively. We can locally improve the projected solution by continuing with a few additional iterations of Viterbi EM. Related models could use very similar projection techniques. 
Given a relaxed joint solution to the parameters and the latent variables, one must be able to project it to a nearby feasible one, by projecting either the fractional parameters or the fractional latent variables into the feasible space and then solving exactly for the other. 6 Related Work The goal of this work was to better understand and address the non-convexity of maximum-likelihood training with latent variables, especially parses. Gimpel and Smith (2012) proposed a concave model for unsupervised dependency parsing using IBM Model 1. This model did not include a tree constraint, but instead initialized EM on the DMV. By contrast, our approach incorporates the tree constraints directly into our convex relaxation and embeds the relaxation in a branch-and-bound algorithm capable of solving the original DMV maximum-likelihood estimation problem. Spectral learning constitutes a wholly different family of consistent estimators, which achieve efficiency because they sidestep maximizing the nonconvex likelihood function. Hsu et al. (2009) introduced a spectral learner for a large class of HMMs. For supervised parsing, spectral learning has been used to learn latent variable PCFGs (Cohen et al., 2012) and hidden-state dependency grammars (Luque et al., 2012). Alas, there are not yet any spectral learning methods that recover latent tree structure, as in grammar induction. Several integer linear programming (ILP) formulations of dependency parsing (Riedel and Clarke, 2006; Martins et al., 2009; Riedel et al., 2012) inspired our definition of grammar induction as a MP. Recent work uses branch-and-bound for decoding with non-local features (Qian and Liu, 2013). These differ from our work by treating the model parameters as constants, thereby yielding a linear objective. For semi-supervised dependency parsing, Wang et al. (2008) used a convex objective, combining unsupervised least squares loss and a supervised large margin loss, This does not apply to our unsupervised setting. Branch-and-bound has also been applied to semi-supervised SVM training, a nonconvex search problem (Chapelle et al., 2007), with a relaxation derived from the dual. 450 7 Experiments We first analyze the behavior of our method on a toy synthetic dataset. Next, we compare various parameter settings for branch-and-bound by estimating the total solution time. Finally, we compare our search method to Viterbi EM on a small subset of the Penn Treebank. All our experiments use the DMV for unsupervised dependency parsing of part-of-speech (POS) tag sequences. For Viterbi EM we initialize the parameters of the model uniformly, breaking parser ties randomly in the first E-step (Spitkovsky et al., 2010b). This initializer is state-of-the-art for Viterbi EM. We also apply add-one smoothing during each M-step. We use random restarts, and select the model with the highest likelihood. We add posterior constraints to Viterbi EM’s Estep. First, we run a relaxed linear programming (LP) parser, then project the (possibly fractional) parses back to the feasible region. If the resulting parse does not respect the posterior constraints, we discard it. The posterior constraint in the LP parser is tighter4 than the one used in the true optimization problem, so the projections tends to be feasible under the true (looser) posterior constraints. In our experiments, all but one projection respected the constraints. We solve all LPs with CPLEX. 
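Both the projecting step of Section 5 and the constrained E-step above rely on mapping an infeasible relaxation solution back onto the feasible region. For the model parameters, the paper either renormalizes or takes the minimum-Euclidean-distance point on the simplex (Chen and Ye, 2011); the sketch below is a standard sorting-based implementation of the latter with invented parameter values, offered as an illustration rather than the authors' code.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a nonnegative vector v onto the probability
    simplex {w : w >= 0, sum(w) = 1}, in the spirit of Chen and Ye (2011)."""
    u = np.sort(v)[::-1]                                   # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - tau, 0.0)

# An infeasible relaxation solution: probabilities exp(theta) summing to more than 1.
theta = np.log(np.array([0.55, 0.35, 0.25]))
probs = np.exp(theta)

normalized = probs / probs.sum()           # method (1): renormalize
projected  = project_to_simplex(probs)     # method (2): closest point on the simplex

print(normalized.round(3), projected.round(3))   # [0.478 0.304 0.217] vs [0.5 0.3 0.2]
```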
7.1 Synthetic Data For our toy example, we generate sentences from a synthetic DMV over three POS tags (Verb, Noun, Adjective) with parameters chosen to favor short sentences with English word order. In Figure 4 we show that the quality of the root relaxation increases as we approach the full set of RLT constraints. That the number of possible RLT constraints increases quadratically with the length of the corpus poses a serious challenge. For just 20 sentences from this synthetic model, the RLT generates 4,056,498 constraints. For a single run of branch-and-bound, Figure 5 shows the global upper and lower bounds over time.5 We consider five relaxations, each using only a subset of the RLT constraints. Max.0k uses only the concave envelope (20)-(21). Max.1k uses the concave envelope and also randomly samples 1,000 other RLT constraints, and so on for Max.10k and Max.100k. Obj.Filter includes all 480% of edges must come from E as opposed to 75%. 5The initial incumbent solution for branch-and-bound is obtained by running Viterbi EM with 10 random restarts. !4 !3 !2 !1 0 ! ! ! ! ! ! ! ! ! ! ! 0.0 0.2 0.4 0.6 0.8 1.0 Proportion of RLT rows included Upper bound on log!likelihood at root Figure 4: The bound quality at the root improves as the proportion of RLT constraints increases, on 5 synthetic sentences. A random subset of 70% of the 320,126 possible RLT constraints matches the relaxation quality of the full set. This bound is very tight: the relaxations in Figure 5 solve hundreds of nodes before such a bound is achieved. !12 !10 !8 !6 !4 !2 0 ! ! ! !!! ! ! ! !!! 20 40 6080 Time (sec) Bounds on log!likelihood Bound type lower upper Relaxation ! RLT Obj.Filter RLT Max.0k RLT Max.1k RLT Max.10k Figure 5: The global upper and lower bounds improve over time for branch-and-bound using different subsets of RLT constraints on 5 synthetic sentences. Each solves the problem to ϵoptimality for ϵ = 0.01. A point marks every 200 nodes processed. (The time axis is log-scaled.) constraints with a nonzero coefficient for one of the RLT variables zm from the linearized objective. The rightmost lines correspond to RLT Max.10k: despite providing the tightest (local) bound at each node, it processed only 110 nodes in the time it took RLT Max.1k to process 1164. RLT Max.0k achieves the best balance of tight bounds and speed per node. 7.2 Comparing branch-and-bound strategies It is prohibitively expensive to repeatedly run our algorithm to completion with a variety of parameter settings. Instead, we estimate the size of the branch-and-bound tree and the solution time using a high-variance estimate that is effective for comparisons (Lobjois and Lemaˆıtre, 1998). Given a fixed set of parameters for our algorithm and an ϵ-optimality stopping criterion, we 451 RLT Relaxation Avg. ms per node # Samples Est. # Nodes Est. # Hours Obj.Filter 63 10000 3.2E+08 4.6E+09 Max.0k 6 10000 1.7E+10 7.8E+10 Max.1k 15 10000 3.5E+08 4.2E+09 Max.10k 161 10000 1.3E+09 3.4E+10 Max.100k 232259 5 1.7E+09 9.7E+13 Table 1: Branch-and-bound node count and completion time estimates. Each standard deviation was close in magnitude to the estimate itself. We ran for 8 hours, stopping at 10,000 samples on 8 synthetic sentences. can view the branch-and-bound tree T as fixed and finite in size. We wish to estimate some cost associated with the tree C(T) = P α∈nodes(T) f(α). Letting f(α) = 1 estimates the number of nodes; if f(α) is the time to solve a node, then we estimate the total solution time using the Monte Carlo method of Knuth (1975). 
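Before turning to the estimates themselves, a minimal sketch of the Knuth (1975) probing estimator may help: each random root-to-leaf probe, reweighted by the branching factors encountered along the way, yields an unbiased estimate of C(T), and the probes are averaged. The toy tree and the choice f(α) = 1 below are illustrative only and do not reflect the paper's actual branch-and-bound trees.

```python
import random

def knuth_estimate(root, children, f, probes=20000):
    """Monte Carlo estimate (Knuth, 1975) of C(T) = sum over nodes alpha of f(alpha),
    obtained from random root-to-leaf probes of the tree T."""
    total = 0.0
    for _ in range(probes):
        node, weight = root, 1.0
        while True:
            total += weight * f(node)
            kids = children(node)
            if not kids:
                break
            weight *= len(kids)        # reweight by the branching factor for unbiasedness
            node = random.choice(kids)
    return total / probes

# Toy stand-in for a search tree: node -> list of children.
toy_tree = {"r": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
est = knuth_estimate("r", lambda n: toy_tree[n], f=lambda n: 1.0)
print(round(est, 2))   # close to the true node count, 5
```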
Table 1 gives these estimates, for the same five RLT relaxations. Obj.Filter yields the smallest estimated tree size. 7.3 Real Data In this section, we compare our global search method to Viterbi EM with random restarts each with or without posterior constraints. We use 200 sentences of no more than 10 tokens from the WSJ portion of the Penn Treebank. We reduce the treebank’s gold part-of-speech (POS) tags to a universal set of 12 tags (Petrov et al., 2012) plus a tag for auxiliaries, ignoring punctuation. Each search method is run for 8 hours. We obtain the initial incumbent solution for branch-and-bound by running Viterbi EM for 45 minutes. The average time to solve a node’s relaxation ranges from 3 seconds for RLT Max.0k to 42 seconds for RLT Max.100k. Figure 6a shows the log-likelihood of the incumbent solution over time. In our global search method, like Viterbi EM, the posterior constraints lead to lower log-likelihoods. RLT Max.0k finds the highest log-likelihood solution. Figure 6b compares the unlabeled directed dependency accuracy of the incumbent solution. In both global and local search, the posterior constraints lead to higher accuracies. Viterbi EM with posterior constraints demonstrates the oscillation of incumbent accuracy: starting at 58.02% accuracy, it finds several high accuracy solutions early on (61.02%), but quickly abandons them to increase likelihood, yielding a final accuracy of 60.65%. RLT Max.0k with posterior constraints obtains the highest overall accuracy of 61.09% at (a) !3300 !3200 !3100 !3000 !2900 ! ! ! ! ! !!!!!! !!! !!!!!!!!!!! !! ! !!!!! ! !!!!!! !!! ! !!!!!!!!!!!! ! 100 200 300 400 Time (min) Log!likelihood (train) Algorithm ! Viterbi EM RLT Obj.Filter RLT Max.0k RLT Max.1k RLT Max.10k RLT Max.100k Posterior Constraints False True (b) 0.35 0.40 0.45 0.50 0.55 0.60 ! ! ! ! !!!!!!! ! !! !!!!!!!!!! ! ! !!!!!! ! !!!!!! !! ! ! !!! !!!!!!!!! ! 100 200 300 400 Time (min) Accuracy (train) Algorithm ! Viterbi EM RLT Obj.Filter RLT Max.0k RLT Max.1k RLT Max.10k RLT Max.100k Posterior Constraints False True Figure 6: Likelihood (a) and accuracy (b) of incumbent solution so far, on a small real dataset. 306 min and the highest final accuracy 60.73%. 8 Discussion In principle, our branch-and-bound method can approach ϵ-optimal solutions to Viterbi training of locally normalized generative models, including the NP-hard case of grammar induction with the DMV. The method can also be used with posterior constraints or a regularized objective. Future work includes algorithmic improvements for solving the relaxation and the development of tighter relaxations. The Dantzig-Wolfe decomposition (Dantzig and Wolfe, 1960) or Lagrangian Relaxation (Held and Karp, 1970) might satisfy both of these goals by pushing the integer tree constraints into a subproblem solved by a dynamic programming parser. Recent work on semidefinite relaxations (Anstreicher, 2009) suggests they may provide tighter bounds at the expense of greater computation time. Perhaps even more important than tightening the bounds at each node are search heuristics (e.g., surface cues) and priors (e.g., universal grammar) that guide our global search by deciding which node to expand next (Chomsky and Lasnik, 1993). 452 References Tobias Achterberg. 2007. Constraint integer programming. Ph.D. thesis, TU Berlin. Warren P. Adams and Hanif D. Sherali. 1986. A tight linearization and an algorithm for zero-one quadratic programming problems. Management Science, 32(10):1274–1290, October. 
ArticleType: research-article / Full publication date: Oct., 1986 / Copyright 1986 INFORMS. Kurt Anstreicher. 2009. Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming. Journal of Global Optimization, 43(2):471–484. Taylor Berg-Kirkpatrick, Alexandre Bouchard-Cˆot´e, DeNero, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Proc. of NAACL, June. Samuel Burer and Dieter Vandenbussche. 2009. Globally solving box-constrained nonconvex quadratic programs with semidefinite-based finite branch-andbound. Computational Optimization and Applications, 43(2):181–195. Olivier Chapelle, Vikas Sindhwani, and S. Sathiya Keerthi. 2007. Branch and bound for semisupervised support vector machines. In Proc. of NIPS 19, pages 217–224. MIT Press. E. Charniak. 1993. Statistical language learning. MIT press. Yunmei Chen and Xiaojing Ye. 2011. Projection onto a simplex. arXiv:1101.6081, January. Noam Chomsky and Howard Lasnik. 1993. Principles and parameters theory. In Syntax: An International Handbook of Contemporary Research. Berlin: de Gruyter. Shay Cohen and Noah A. Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In Proc. of HLTNAACL, pages 74–82, June. Shay Cohen and Noah A. Smith. 2010. Viterbi training for PCFGs: Hardness results and competitiveness of uniform initialization. In Proc. of ACL, pages 1502– 1511, July. S. B. Cohen, K. Gimpel, and N. A. Smith. 2009. Logistic normal priors for unsupervised probabilistic grammar induction. In Proceedings of NIPS. Shay B. Cohen, Karl Stratos, Michael Collins, Dean P. Foster, and Lyle Ungar. 2012. Spectral learning of latent-variable PCFGs. In Proc. of ACL (Volume 1: Long Papers), pages 223–231. Association for Computational Linguistics, July. George B. Dantzig and Philip Wolfe. 1960. Decomposition principle for linear programs. Operations Research, 8(1):101–111, January. Jason Eisner and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proc. of ACL, pages 457– 464, June. Jennifer Gillenwater, Kuzman Ganchev, Joo Graa, Fernando Pereira, and Ben Taskar. 2010. Sparsity in dependency grammar induction. In Proceedings of the ACL 2010 Conference Short Papers, pages 194–199. Association for Computational Linguistics, July. K. Gimpel and N. A. Smith. 2012. Concavity and initialization for unsupervised dependency parsing. In Proc. of NAACL. M. Held and R. M. Karp. 1970. The travelingsalesman problem and minimum spanning trees. Operations Research, 18(6):1138–1162. D. Hsu, S. M Kakade, and T. Zhang. 2009. A spectral algorithm for learning hidden markov models. In COLT 2009 - The 22nd Conference on Learning Theory. Dan Klein and Christopher Manning. 2004. Corpusbased induction of syntactic structure: Models of dependency and constituency. In Proc. of ACL, pages 478–485, July. D. E. Knuth. 1975. Estimating the efficiency of backtrack programs. Mathematics of computation, 29(129):121–136. L. Lobjois and M. Lemaˆıtre. 1998. Branch and bound algorithm selection by performance prediction. In Proc. of the National Conference on Artificial Intelligence, pages 353–358. Franco M. Luque, Ariadna Quattoni, Borja Balle, and Xavier Carreras. 2012. Spectral learning for non-deterministic dependency parsing. In Proc. of EACL, pages 409–419, April. Thomas L. Magnanti and Laurence A. Wolsey. 1994. Optimal Trees. 
Center for Operations Research and Econometrics. Alexander Martin. 2000. Integer programs with block structure. Technical Report SC-99-03, ZIB. Andr´e Martins, Noah A. Smith, and Eric Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proc. of ACL-IJCNLP, pages 342–350, August. Garth P. McCormick. 1976. Computability of global solutions to factorable nonconvex programs: Part I—Convex underestimating problems. Mathematical Programming, 10(1):147–175. 453 Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowledge to guide grammar induction. In Proc. of EMNLP, pages 1234–1244, October. P. M. Pardalos. 1991. Global optimization algorithms for linearly constrained indefinite quadratic problems. Computers & Mathematics with Applications, 21(6):87–97. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proc. of LREC. Xian Qian and Yang Liu. 2013. Branch and bound algorithm for dependency parsing with non-local features. TACL, 1:37—48. Sebastian Riedel and James Clarke. 2006. Incremental integer linear programming for non-projective dependency parsing. In Proc. of EMNLP, pages 129– 137, July. Sebastian Riedel, David Smith, and Andrew McCallum. 2012. Parse, price and cut—Delayed column and row generation for graph based parsers. In Proc. of EMNLP-CoNLL, pages 732–743, July. Hanif D. Sherali and Warren P. Adams. 1990. A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems. SIAM Journal on Discrete Mathematics, 3(3):411–430, August. H. Sherali and L. Liberti. 2008. Reformulationlinearization technique for global optimization. Encyclopedia of Optimization, 2:3263–3268. Hanif D. Sherali and Cihan H. Tuncbilek. 1995. A reformulation-convexification approach for solving nonconvex quadratic programming problems. Journal of Global Optimization, 7(1):1–31. Noah A. Smith and Jason Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction. In Proc. of COLING-ACL, pages 569–576, July. N.A. Smith. 2006. Novel estimation methods for unsupervised discovery of latent structure in natural language text. Ph.D. thesis, Johns Hopkins University, Baltimore, MD. Valentin I Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2010a. From baby steps to leapfrog: How Less is more in unsupervised dependency parsing. In Proc. of HLT-NAACL, pages 751–759. Association for Computational Linguistics, June. Valentin I Spitkovsky, Hiyan Alshawi, Daniel Jurafsky, and Christopher D Manning. 2010b. Viterbi training improves unsupervised dependency parsing. In Proc. of CoNLL, pages 9–17. Association for Computational Linguistics, July. S. A. Vavasis. 1991. Nonlinear optimization: complexity issues. Oxford University Press, Inc. Qin Iris Wang, Dale Schuurmans, and Dekang Lin. 2008. Semi-supervised convex training for dependency parsing. In Proc of ACL-HLT, pages 532–540. Association for Computational Linguistics, June. 454
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 455–465, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Parsing with Compositional Vector Grammars Richard Socher John Bauer Christopher D. Manning Andrew Y. Ng Computer Science Department, Stanford University, Stanford, CA 94305, USA [email protected], [email protected], [email protected], [email protected] Abstract Natural language parsing has typically been done with small sets of discrete categories such as NP and VP, but this representation does not capture the full syntactic nor semantic richness of linguistic phrases, and attempts to improve on this by lexicalizing phrases or splitting categories only partly address the problem at the cost of huge feature spaces and sparseness. Instead, we introduce a Compositional Vector Grammar (CVG), which combines PCFGs with a syntactically untied recursive neural network that learns syntactico-semantic, compositional vector representations. The CVG improves the PCFG of the Stanford Parser by 3.8% to obtain an F1 score of 90.4%. It is fast to train and implemented approximately as an efficient reranker it is about 20% faster than the current Stanford factored parser. The CVG learns a soft notion of head words and improves performance on the types of ambiguities that require semantic information such as PP attachments. 1 Introduction Syntactic parsing is a central task in natural language processing because of its importance in mediating between linguistic expression and meaning. For example, much work has shown the usefulness of syntactic representations for subsequent tasks such as relation extraction, semantic role labeling (Gildea and Palmer, 2002) and paraphrase detection (Callison-Burch, 2008). Syntactic descriptions standardly use coarse discrete categories such as NP for noun phrases or PP for prepositional phrases. However, recent work has shown that parsing results can be greatly improved by defining more fine-grained syntactic (riding,V, ) (a,Det, ) (bike,NN, ) (a bike,NP, ) (riding a bike,VP, ) Discrete Syntactic – Continuous Semantic Representations in the Compositional Vector Grammar Figure 1: Example of a CVG tree with (category,vector) representations at each node. The vectors for nonterminals are computed via a new type of recursive neural network which is conditioned on syntactic categories from a PCFG. categories, which better capture phrases with similar behavior, whether through manual feature engineering (Klein and Manning, 2003a) or automatic learning (Petrov et al., 2006). However, subdividing a category like NP into 30 or 60 subcategories can only provide a very limited representation of phrase meaning and semantic similarity. Two strands of work therefore attempt to go further. First, recent work in discriminative parsing has shown gains from careful engineering of features (Taskar et al., 2004; Finkel et al., 2008). Features in such parsers can be seen as defining effective dimensions of similarity between categories. Second, lexicalized parsers (Collins, 2003; Charniak, 2000) associate each category with a lexical item. This gives a fine-grained notion of semantic similarity, which is useful for tackling problems like ambiguous attachment decisions. However, this approach necessitates complex shrinkage estimation schemes to deal with the sparsity of observations of the lexicalized categories. 
In many natural language systems, single words and n-grams are usefully described by their distributional similarities (Brown et al., 1992), among many others. But, even with large corpora, many 455 n-grams will never be seen during training, especially when n is large. In these cases, one cannot simply use distributional similarities to represent unseen phrases. In this work, we present a new solution to learn features and phrase representations even for very long, unseen n-grams. We introduce a Compositional Vector Grammar Parser (CVG) for structure prediction. Like the above work on parsing, the model addresses the problem of representing phrases and categories. Unlike them, it jointly learns how to parse and how to represent phrases as both discrete categories and continuous vectors as illustrated in Fig. 1. CVGs combine the advantages of standard probabilistic context free grammars (PCFG) with those of recursive neural networks (RNNs). The former can capture the discrete categorization of phrases into NP or PP while the latter can capture fine-grained syntactic and compositional-semantic information on phrases and words. This information can help in cases where syntactic ambiguity can only be resolved with semantic information, such as in the PP attachment of the two sentences: They ate udon with forks. vs. They ate udon with chicken. Previous RNN-based parsers used the same (tied) weights at all nodes to compute the vector representing a constituent (Socher et al., 2011b). This requires the composition function to be extremely powerful, since it has to combine phrases with different syntactic head words, and it is hard to optimize since the parameters form a very deep neural network. We generalize the fully tied RNN to one with syntactically untied weights. The weights at each node are conditionally dependent on the categories of the child constituents. This allows different composition functions when combining different types of phrases and is shown to result in a large improvement in parsing accuracy. Our compositional distributed representation allows a CVG parser to make accurate parsing decisions and capture similarities between phrases and sentences. Any PCFG-based parser can be improved with an RNN. We use a simplified version of the Stanford Parser (Klein and Manning, 2003a) as the base PCFG and improve its accuracy from 86.56 to 90.44% labeled F1 on all sentences of the WSJ section 23. The code of our parser is available at nlp.stanford.edu. 2 Related Work The CVG is inspired by two lines of research: Enriching PCFG parsers through more diverse sets of discrete states and recursive deep learning models that jointly learn classifiers and continuous feature representations for variable-sized inputs. Improving Discrete Syntactic Representations As mentioned in the introduction, there are several approaches to improving discrete representations for parsing. Klein and Manning (2003a) use manual feature engineering, while Petrov et al. (2006) use a learning algorithm that splits and merges the syntactic categories in order to maximize likelihood on the treebank. Their approach splits categories into several dozen subcategories. Another approach is lexicalized parsers (Collins, 2003; Charniak, 2000) that describe each category with a lexical item, usually the head word. More recently, Hall and Klein (2012) combine several such annotation schemes in a factored parser. We extend the above ideas from discrete representations to richer continuous ones. 
The CVG can be seen as factoring discrete and continuous parsing in one model. Another different approach to the above generative models is to learn discriminative parsers using many well designed features (Taskar et al., 2004; Finkel et al., 2008). We also borrow ideas from this line of research in that our parser combines the generative PCFG model with discriminatively learned RNNs. Deep Learning and Recursive Deep Learning Early attempts at using neural networks to describe phrases include Elman (1991), who used recurrent neural networks to create representations of sentences from a simple toy grammar and to analyze the linguistic expressiveness of the resulting representations. Words were represented as one-on vectors, which was feasible since the grammar only included a handful of words. Collobert and Weston (2008) showed that neural networks can perform well on sequence labeling language processing tasks while also learning appropriate features. However, their model is lacking in that it cannot represent the recursive structure inherent in natural language. They partially circumvent this problem by using either independent window-based classifiers or a convolutional layer. RNN-specific training was introduced by Goller and K¨uchler (1996) to learn distributed representations of given, structured objects such as logical terms. In contrast, our model both predicts the structure and its representation. 456 Henderson (2003) was the first to show that neural networks can be successfully used for large scale parsing. He introduced a left-corner parser to estimate the probabilities of parsing decisions conditioned on the parsing history. The input to Henderson’s model consists of pairs of frequent words and their part-of-speech (POS) tags. Both the original parsing system and its probabilistic interpretation (Titov and Henderson, 2007) learn features that represent the parsing history and do not provide a principled linguistic representation like our phrase representations. Other related work includes (Henderson, 2004), who discriminatively trains a parser based on synchrony networks and (Titov and Henderson, 2006), who use an SVM to adapt a generative parser to different domains. Costa et al. (2003) apply recursive neural networks to re-rank possible phrase attachments in an incremental parser. Their work is the first to show that RNNs can capture enough information to make correct parsing decisions, but they only test on a subset of 2000 sentences. Menchetti et al. (2005) use RNNs to re-rank different parses. For their results on full sentence parsing, they rerank candidate trees created by the Collins parser (Collins, 2003). Similar to their work, we use the idea of letting discrete categories reduce the search space during inference. We compare to fully tied RNNs in which the same weights are used at every node. Our syntactically untied RNNs outperform them by a significant margin. The idea of untying has also been successfully used in deep learning applied to vision (Le et al., 2010). This paper uses several ideas of (Socher et al., 2011b). The main differences are (i) the dual representation of nodes as discrete categories and vectors, (ii) the combination with a PCFG, and (iii) the syntactic untying of weights based on child categories. We directly compare models with fully tied and untied weights. Another work that represents phrases with a dual discrete-continuous representation is (Kartsaklis et al., 2012). 
3 Compositional Vector Grammars This section introduces Compositional Vector Grammars (CVGs), a model to jointly find syntactic structure and capture compositional semantic information. CVGs build on two observations. Firstly, that a lot of the structure and regularity in languages can be captured by well-designed syntactic patterns. Hence, the CVG builds on top of a standard PCFG parser. However, many parsing decisions show fine-grained semantic factors at work. Therefore we combine syntactic and semantic information by giving the parser access to rich syntacticosemantic information in the form of distributional word vectors and compute compositional semantic vector representations for longer phrases (Costa et al., 2003; Menchetti et al., 2005; Socher et al., 2011b). The CVG model merges ideas from both generative models that assume discrete syntactic categories and discriminative models that are trained using continuous vectors. We will first briefly introduce single word vector representations and then describe the CVG objective function, tree scoring and inference. 3.1 Word Vector Representations In most systems that use a vector representation for words, such vectors are based on cooccurrence statistics of each word and its context (Turney and Pantel, 2010). Another line of research to learn distributional word vectors is based on neural language models (Bengio et al., 2003) which jointly learn an embedding of words into an n-dimensional feature space and use these embeddings to predict how suitable a word is in its context. These vector representations capture interesting linear relationships (up to some accuracy), such as king−man+woman ≈queen (Mikolov et al., 2013). Collobert and Weston (2008) introduced a new model to compute such an embedding. The idea is to construct a neural network that outputs high scores for windows that occur in a large unlabeled corpus and low scores for windows where one word is replaced by a random word. When such a network is optimized via gradient ascent the derivatives backpropagate into the word embedding matrix X. In order to predict correct scores the vectors in the matrix capture co-occurrence statistics. For further details and evaluations of these embeddings, see (Turian et al., 2010; Huang et al., 2012). The resulting X matrix is used as follows. Assume we are given a sentence as an ordered list of m words. Each word w has an index [w] = i into the columns of the embedding matrix. This index is used to retrieve the word’s vector representation aw using a simple multiplication with a binary vector e, which is zero everywhere, except 457 at the ith index. So aw = Lei ∈Rn. Henceforth, after mapping each word to its vector, we represent a sentence S as an ordered list of (word,vector) pairs: x = ((w1, aw1), . . . , (wm, awm)). Now that we have discrete and continuous representations for all words, we can continue with the approach for computing tree structures and vectors for nonterminal nodes. 3.2 Max-Margin Training Objective for CVGs The goal of supervised parsing is to learn a function g : X →Y, where X is the set of sentences and Y is the set of all possible labeled binary parse trees. The set of all possible trees for a given sentence xi is defined as Y (xi) and the correct tree for a sentence is yi. We first define a structured margin loss ∆(yi, ˆy) for predicting a tree ˆy for a given correct tree. The loss increases the more incorrect the proposed parse tree is (Goodman, 1998). 
The discrepancy between trees is measured by counting the number of nodes N(y) with an incorrect span (or label) in the proposed tree: ∆(yi, ˆy) = X d∈N(ˆy) κ1{d /∈N(yi)}. (1) We set κ = 0.1 in all experiments. For a given set of training instances (xi, yi), we search for the function gθ, parameterized by θ, with the smallest expected loss on a new sentence. It has the following form: gθ(x) = arg max ˆy∈Y (x) s(CVG(θ, x, ˆy)), (2) where the tree is found by the Compositional Vector Grammar (CVG) introduced below and then scored via the function s. The higher the score of a tree the more confident the algorithm is that its structure is correct. This max-margin, structureprediction objective (Taskar et al., 2004; Ratliff et al., 2007; Socher et al., 2011b) trains the CVG so that the highest scoring tree will be the correct tree: gθ(xi) = yi and its score will be larger up to a margin to other possible trees ˆy ∈Y(xi): s(CVG(θ, xi, yi)) ≥s(CVG(θ, xi, ˆy)) + ∆(yi, ˆy). This leads to the regularized risk function for m training examples: J(θ) = 1 m m X i=1 ri(θ) + λ 2 ∥θ∥2 2, where ri(θ) = max ˆy∈Y (xi) s(CVG(xi, ˆy)) + ∆(yi, ˆy)  −s(CVG(xi, yi)) (3) Intuitively, to minimize this objective, the score of the correct tree yi is increased and the score of the highest scoring incorrect tree ˆy is decreased. 3.3 Scoring Trees with CVGs For ease of exposition, we first describe how to score an existing fully labeled tree with a standard RNN and then with a CVG. The subsequent section will then describe a bottom-up beam search and its approximation for finding the optimal tree. Assume, for now, we are given a labeled parse tree as shown in Fig. 2. We define the word representations as (vector, POS) pairs: ((a, A), (b, B), (c, C)), where the vectors are defined as in Sec. 3.1 and the POS tags come from a PCFG. The standard RNN essentially ignores all POS tags and syntactic categories and each nonterminal node is associated with the same neural network (i.e., the weights across nodes are fully tied). We can represent the binary tree in Fig. 2 in the form of branching triplets (p →c1c2). Each such triplet denotes that a parent node p has two children and each ck can be either a word vector or a non-terminal node in the tree. For the example in Fig. 2, we would get the triples ((p1 →bc), (p2 →ap1)). Note that in order to replicate the neural network and compute node representations in a bottom up fashion, the parent must have the same dimensionality as the children: p ∈Rn. Given this tree structure, we can now compute activations for each node from the bottom up. We begin by computing the activation for p1 using the children’s word vectors. We first concatenate the children’s representations b, c ∈Rn×1 into a vector  b c  ∈R2n×1. Then the composition function multiplies this vector by the parameter weights of the RNN W ∈Rn×2n and applies an element-wise nonlinearity function f = tanh to the output vector. The resulting output p(1) is then given as input to compute p(2). p(1) = f  W  b c  , p(2) = f  W  a p1  458 (A, a= ) (B, b= ) (C, c= ) P(1), p(1)= P(2), p(2)= Standard Recursive Neural Network = f W b c = f W a p(1) Figure 2: An example tree with a simple Recursive Neural Network: The same weight matrix is replicated and used to compute all non-terminal node representations. Leaf nodes are n-dimensional vector representations of words. 
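As a minimal concrete rendering of Figure 2's computation, the fragment below composes the toy tree ((p1 → b c), (p2 → a p1)) with a single tied weight matrix. The dimensionality, random initialization, and word vectors are placeholders rather than trained values; only the form p = f(W[b; c]) with f = tanh is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                           # toy embedding dimensionality
W = rng.normal(scale=0.1, size=(n, 2 * n))      # single tied composition matrix

# Placeholder word vectors for the words a, b, c in Figure 2.
a, b, c = (rng.normal(size=n) for _ in range(3))

def compose(left, right):
    """Standard RNN composition: p = tanh(W [left; right])."""
    return np.tanh(W @ np.concatenate([left, right]))

p1 = compose(b, c)    # p(1) covers the span "b c"
p2 = compose(a, p1)   # p(2) covers the whole sentence; same dimensionality as the children

print(p1.shape, p2.shape)   # (4,) (4,)
```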
In order to compute a score of how plausible of a syntactic constituent a parent is the RNN uses a single-unit linear layer for all i: s(p(i)) = vT p(i), where v ∈Rn is a vector of parameters that need to be trained. This score will be used to find the highest scoring tree. For more details on how standard RNNs can be used for parsing, see Socher et al. (2011b). The standard RNN requires a single composition function to capture all types of compositions: adjectives and nouns, verbs and nouns, adverbs and adjectives, etc. Even though this function is a powerful one, we find a single neural network weight matrix cannot fully capture the richness of compositionality. Several extensions are possible: A two-layered RNN would provide more expressive power, however, it is much harder to train because the resulting neural network becomes very deep and suffers from vanishing gradient problems. Socher et al. (2012) proposed to give every single word a matrix and a vector. The matrix is then applied to the sibling node’s vector during the composition. While this results in a powerful composition function that essentially depends on the words being combined, the number of model parameters explodes and the composition functions do not capture the syntactic commonalities between similar POS tags or syntactic categories. Based on the above considerations, we propose the Compositional Vector Grammar (CVG) that conditions the composition function at each node on discrete syntactic categories extracted from a (A, a= ) (B, b= ) (C, c= ) P(1), p(1)= P(2), p(2)= Syntactically Untied Recursive Neural Network = f W(B,C) b c = f W(A,P ) a p(1) (1) Figure 3: Example of a syntactically untied RNN in which the function to compute a parent vector depends on the syntactic categories of its children which we assume are given for now. PCFG. Hence, CVGs combine discrete, syntactic rule probabilities and continuous vector compositions. The idea is that the syntactic categories of the children determine what composition function to use for computing the vector of their parents. While not perfect, a dedicated composition function for each rule RHS can well capture common composition processes such as adjective or adverb modification versus noun or clausal complementation. For instance, it could learn that an NP should be similar to its head noun and little influenced by a determiner, whereas in an adjective modification both words considerably determine the meaning of a phrase. The original RNN is parameterized by a single weight matrix W. In contrast, the CVG uses a syntactically untied RNN (SU-RNN) which has a set of such weights. The size of this set depends on the number of sibling category combinations in the PCFG. Fig. 3 shows an example SU-RNN that computes parent vectors with syntactically untied weights. The CVG computes the first parent vector via the SU-RNN: p(1) = f  W (B,C)  b c  , where W (B,C) ∈Rn×2n is now a matrix that depends on the categories of the two children. In this bottom up procedure, the score for each node consists of summing two elements: First, a single linear unit that scores the parent vector and second, the log probability of the PCFG for the rule that combines these two children: s  p(1) = v(B,C)T p(1) + log P(P1 →B C), (4) 459 where P(P1 →B C) comes from the PCFG. 
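The scoring of Eq. 4 with syntactically untied weights can be sketched as below; the category-indexed matrices, scoring vectors, and PCFG log-probabilities are illustrative placeholders rather than learned model parameters.

```python
# Sketch of the SU-RNN composition and the node score of Eq. 4 (illustrative values).
import numpy as np

n = 4
rng = np.random.default_rng(1)
W_pairs = {("B", "C"): rng.uniform(-0.1, 0.1, (n, 2 * n)),      # one matrix per child-category pair
           ("A", "P1"): rng.uniform(-0.1, 0.1, (n, 2 * n))}
v_pairs = {k: rng.uniform(-0.1, 0.1, n) for k in W_pairs}       # one scoring vector per pair
log_pcfg = {("P1", "B", "C"): np.log(0.3),                      # placeholder rule probabilities
            ("P2", "A", "P1"): np.log(0.2)}

def su_compose(cat_left, vec_left, cat_right, vec_right):
    """Parent vector computed with the matrix selected by the children's categories."""
    W = W_pairs[(cat_left, cat_right)]
    return np.tanh(W @ np.concatenate([vec_left, vec_right]))

def node_score(parent_cat, cat_left, vec_left, cat_right, vec_right):
    """s(p) = v_(B,C)^T p + log P(parent -> left right), as in Eq. 4."""
    p = su_compose(cat_left, vec_left, cat_right, vec_right)
    return p, v_pairs[(cat_left, cat_right)] @ p + log_pcfg[(parent_cat, cat_left, cat_right)]

b, c, a = (rng.uniform(-1, 1, n) for _ in range(3))
p1, s1 = node_score("P1", "B", b, "C", c)
p2, s2 = node_score("P2", "A", a, "P1", p1)
```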
This can be interpreted as the log probability of a discrete-continuous rule application with the following factorization: P((P1, p1) →(B, b)(C, c)) (5) = P(p1 →b c|P1 →B C)P(P1 →B C), Note, however, that due to the continuous nature of the word vectors, the probability of such a CVG rule application is not comparable to probabilities provided by a PCFG since the latter sum to 1 for all children. Assuming that node p1 has syntactic category P1, we compute the second parent vector via: p(2) = f  W (A,P1)  a p(1)  . The score of the last parent in this trigram is computed via: s  p(2) = v(A,P1)T p(2) + log P(P2 →A P1). 3.4 Parsing with CVGs The above scores (Eq. 4) are used in the search for the correct tree for a sentence. The goodness of a tree is measured in terms of its score and the CVG score of a complete tree is the sum of the scores at each node: s(CVG(θ, x, ˆy)) = X d∈N(ˆy) s  pd . (6) The main objective function in Eq. 3 includes a maximization over all possible trees maxˆy∈Y (x). Finding the global maximum, however, cannot be done efficiently for longer sentences nor can we use dynamic programming. This is due to the fact that the vectors break the independence assumptions of the base PCFG. A (category, vector) node representation is dependent on all the words in its span and hence to find the true global optimum, we would have to compute the scores for all binary trees. For a sentence of length n, there are Catalan(n) many possible binary trees which is very large even for moderately long sentences. One could use a bottom-up beam search, keeping a k-best list at every cell of the chart, possibly for each syntactic category. This beam search inference procedure is still considerably slower than using only the simplified base PCFG, especially since it has a small state space (see next section for details). Since each probability look-up is cheap but computing SU-RNN scores requires a matrix product, we would like to reduce the number of SU-RNN score computations to only those trees that require semantic information. We note that labeled F1 of the Stanford PCFG parser on the test set is 86.17%. However, if one used an oracle to select the best tree from the top 200 trees that it produces, one could get an F1 of 95.46%. We use this knowledge to speed up inference via two bottom-up passes through the parsing chart. During the first one, we use only the base PCFG to run CKY dynamic programming through the tree. The k = 200-best parses at the top cell of the chart are calculated using the efficient algorithm of (Huang and Chiang, 2005). Then, the second pass is a beam search with the full CVG model (including the more expensive matrix multiplications of the SU-RNN). This beam search only considers phrases that appear in the top 200 parses. This is similar to a re-ranking setup but with one main difference: the SU-RNN rule score computation at each node still only has access to its child vectors, not the whole tree or other global features. This allows the second pass to be very fast. We use this setup in our experiments below. 3.5 Training SU-RNNs The full CVG model is trained in two stages. First the base PCFG is trained and its top trees are cached and then used for training the SU-RNN conditioned on the PCFG. The SU-RNN is trained using the objective in Eq. 3 and the scores as exemplified by Eq. 6. For each sentence, we use the method described above to efficiently find an approximation for the optimal tree. 
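The two-pass inference of Sec. 3.4 amounts to rescoring base-PCFG candidates with the summed node scores of Eq. 6. The sketch below assumes the k-best list is already available and uses a placeholder per-node scoring function; the restricted beam search over constituents of the top-k trees is not reproduced.

```python
# Sketch of CVG tree scoring (Eq. 6) applied to a PCFG k-best list (simplified).
def cvg_tree_score(tree_nodes, cvg_node_score):
    """s(CVG(theta, x, y)) = sum of node scores over the nodes of the tree."""
    return sum(cvg_node_score(node) for node in tree_nodes)

def rescore_kbest(kbest_trees, cvg_node_score):
    """Return the base-PCFG candidate (list of nodes) with the highest CVG score."""
    return max(kbest_trees, key=lambda tree: cvg_tree_score(tree, cvg_node_score))

# Usage (hypothetical): best = rescore_kbest(top200_trees, my_node_scorer)
```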
To minimize the objective we want to increase the scores of the correct tree’s constituents and decrease the score of those in the highest scoring incorrect tree. Derivatives are computed via backpropagation through structure (BTS) (Goller and K¨uchler, 1996). The derivative of tree i has to be taken with respect to all parameter matrices W (AB) that appear in it. The main difference between backpropagation in standard RNNs and SURNNs is that the derivatives at each node only add to the overall derivative of the specific matrix at that node. For more details on backpropagation through RNNs, see Socher et al. (2010) 460 3.6 Subgradient Methods and AdaGrad The objective function is not differentiable due to the hinge loss. Therefore, we generalize gradient ascent via the subgradient method (Ratliff et al., 2007) which computes a gradient-like direction. Let θ = (X, W (··), v(··)) ∈RM be a vector of all M model parameters, where we denote W (··) as the set of matrices that appear in the training set. The subgradient of Eq. 3 becomes: ∂J ∂θ = X i ∂s(xi, ˆymax) ∂θ −∂s(xi, yi) ∂θ + θ, where ˆymax is the tree with the highest score. To minimize the objective, we use the diagonal variant of AdaGrad (Duchi et al., 2011) with minibatches. For our parameter updates, we first define gτ ∈RM×1 to be the subgradient at time step τ and Gt = Pt τ=1 gτgT τ . The parameter update at time step t then becomes: θt = θt−1 −α (diag(Gt))−1/2 gt, (7) where α is the learning rate. Since we use the diagonal of Gt, we only have to store M values and the update becomes fast to compute: At time step t, the update for the i’th parameter θt,i is: θt,i = θt−1,i − α qPt τ=1 g2 τ,i gt,i. (8) Hence, the learning rate is adapting differently for each parameter and rare parameters get larger updates than frequently occurring parameters. This is helpful in our setting since some W matrices appear in only a few training trees. This procedure found much better optima (by ≈3% labeled F1 on the dev set), and converged more quickly than L-BFGS which we used previously in RNN training (Socher et al., 2011a). Training time is roughly 4 hours on a single machine. 3.7 Initialization of Weight Matrices In the absence of any knowledge on how to combine two categories, our prior for combining two vectors is to average them instead of performing a completely random projection. Hence, we initialize the binary W matrices with: W (··) = 0.5[In×nIn×n0n×1] + ϵ, where we include the bias in the last column and the random variable is uniformly distributed: ϵ ∼ U[−0.001, 0.001]. The first block is multiplied by the left child and the second by the right child: W (AB)   a b 1   = h W (A)W (B)bias i   a b 1   = W (A)a + W (B)b + bias. 4 Experiments We evaluate the CVG in two ways: First, by a standard parsing evaluation on Penn Treebank WSJ and then by analyzing the model errors in detail. 4.1 Cross-validating Hyperparameters We used the first 20 files of WSJ section 22 to cross-validate several model and optimization choices. The base PCFG uses simplified categories of the Stanford PCFG Parser (Klein and Manning, 2003a). We decreased the state splitting of the PCFG grammar (which helps both by making it less sparse and by reducing the number of parameters in the SU-RNN) by adding the following options to training: ‘-noRightRec dominatesV 0 -baseNP 0’. This reduces the number of states from 15,276 to 12,061 states and 602 POS tags. These include split categories, such as parent annotation categories like VPˆS. 
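Returning briefly to the optimization of Sec. 3.6 and the initialization of Sec. 3.7, both can be sketched in a few lines; the learning rate, shapes, and the small epsilon added to guard against division by zero are illustrative choices, not values taken from the paper.

```python
# Sketch of the diagonal AdaGrad update (Eq. 8) and the W initialization of Sec. 3.7.
import numpy as np

def adagrad_update(theta, grad, grad_sq_sum, alpha=0.1, eps=1e-8):
    """theta_{t,i} = theta_{t-1,i} - alpha * g_{t,i} / sqrt(sum_tau g_{tau,i}^2)."""
    grad_sq_sum += grad ** 2
    theta -= alpha * grad / (np.sqrt(grad_sq_sum) + eps)   # eps is an added numerical guard
    return theta, grad_sq_sum

def init_composition_matrix(n, scale=0.001):
    """W = 0.5 [I  I  0] + eps: averaging the two children, with a zero bias column."""
    W = 0.5 * np.hstack([np.eye(n), np.eye(n), np.zeros((n, 1))])
    return W + np.random.uniform(-scale, scale, W.shape)
```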
Furthermore, we ignore all category splits for the SURNN weights, resulting in 66 unary and 882 binary child pairs. Hence, the SU-RNN has 66+882 transformation matrices and scoring vectors. Note that any PCFG, including latent annotation PCFGs (Matsuzaki et al., 2005) could be used. However, since the vectors will capture lexical and semantic information, even simple base PCFGs can be substantially improved. Since the computational complexity of PCFGs depends on the number of states, a base PCFG with fewer states is much faster. Testing on the full WSJ section 22 dev set (1700 sentences) takes roughly 470 seconds with the simple base PCFG, 1320 seconds with our new CVG and 1600 seconds with the currently published Stanford factored parser. Hence, increased performance comes also with a speed improvement of approximately 20%. We fix the same regularization of λ = 10−4 for all parameters. The minibatch size was set to 20. We also cross-validated on AdaGrad’s learning rate which was eventually set to α = 0.1 and word vector size. The 25-dimensional vectors provided by Turian et al. (2010) provided the best 461 Parser dev (all) test≤40 test (all) Stanford PCFG 85.8 86.2 85.5 Stanford Factored 87.4 87.2 86.6 Factored PCFGs 89.7 90.1 89.4 Collins 87.7 SSN (Henderson) 89.4 Berkeley Parser 90.1 CVG (RNN) 85.7 85.1 85.0 CVG (SU-RNN) 91.2 91.1 90.4 Charniak-SelfTrain 91.0 Charniak-RS 92.1 Table 1: Comparison of parsers with richer state representations on the WSJ. The last line is the self-trained re-ranked Charniak parser. performance and were faster than 50-,100- or 200dimensional ones. We hypothesize that the larger word vector sizes, while capturing more semantic knowledge, result in too many SU-RNN matrix parameters to train and hence perform worse. 4.2 Results on WSJ The dev set accuracy of the best model is 90.93% labeled F1 on all sentences. This model resulted in 90.44% on the final test set (WSJ section 23). Table 1 compares our results to the two Stanford parser variants (the unlexicalized PCFG (Klein and Manning, 2003a) and the factored parser (Klein and Manning, 2003b)) and other parsers that use richer state representations: the Berkeley parser (Petrov and Klein, 2007), Collins parser (Collins, 1997), SSN: a statistical neural network parser (Henderson, 2004), Factored PCFGs (Hall and Klein, 2012), CharniakSelfTrain: the self-training approach of McClosky et al. (2006), which bootstraps and parses additional large corpora multiple times, Charniak-RS: the state of the art self-trained and discriminatively re-ranked Charniak-Johnson parser combining (Charniak, 2000; McClosky et al., 2006; Charniak and Johnson, 2005). See Kummerfeld et al. (2012) for more comparisons. We compare also to a standard RNN ‘CVG (RNN)’ and to the proposed CVG with SU-RNNs. 4.3 Model Analysis Analysis of Error Types. Table 2 shows a detailed comparison of different errors. We use the code provided by Kummerfeld et al. (2012) and compare to the previous version of the Stanford factored parser as well as to the Berkeley and Charniak-reranked-self-trained parsers (defined above). See Kummerfeld et al. (2012) for details and comparisons to other parsers. 
Error Type      Stanford   CVG    Berkeley   Char-RS
PP Attach         1.02     0.79     0.82      0.60
Clause Attach     0.64     0.43     0.50      0.38
Diff Label        0.40     0.29     0.29      0.31
Mod Attach        0.37     0.27     0.27      0.25
NP Attach         0.44     0.31     0.27      0.25
Co-ord            0.39     0.32     0.38      0.23
1-Word Span       0.48     0.31     0.28      0.20
Unary             0.35     0.22     0.24      0.14
NP Int            0.28     0.19     0.18      0.14
Other             0.62     0.41     0.41      0.50
Table 2: Detailed comparison of different parsers.

One of the largest sources of improved performance over the original Stanford factored parser is in the correct placement of PP phrases. When measuring only the F1 of parse nodes that include at least one PP child, the CVG improves the Stanford parser by 6.2% to an F1 of 77.54%. This is a 0.23 reduction in the average number of bracket errors per sentence. The ‘Other’ category includes VP, PRN and other attachments, appositives and internal structures of modifiers and QPs.

Analysis of Composition Matrices. An analysis of the norms of the binary matrices reveals that the model learns a soft vectorized notion of head words: head words are given larger weights and importance when computing the parent vector. For the matrices combining siblings with categories VP:PP, VP:NP and VP:PRT, the weights in the part of the matrix which is multiplied with the VP child vector dominate. Similarly NPs dominate DTs. Fig. 5 shows example matrices. The two strong diagonals are due to the initialization described in Sec. 3.7.

Semantic Transfer for PP Attachments. In this small model analysis, we use two pairs of sentences that the original Stanford parser and the CVG did not parse correctly after training on the WSJ. We then continue to train both parsers on two similar sentences and then analyze whether the parsers correctly transferred the knowledge. The training sentences are "He eats spaghetti with a fork." and "She eats spaghetti with pork." The very similar test sentences are "He eats spaghetti with a spoon." and "He eats spaghetti with meat." Initially, both parsers incorrectly attach the PP to the verb in both test sentences. After training, the CVG parses both correctly, while the factored Stanford parser incorrectly attaches both PPs to spaghetti. The CVG's ability to transfer the correct PP attachments is due to the semantic word vector similarity between the words in the sentences. Fig. 4 shows the outputs of the two parsers.

[Figure 4 content: parse trees produced by (a) the Stanford factored parser and (b) the Compositional Vector Grammar for the test sentences "He eats spaghetti with a spoon." and "He eats spaghetti with meat."] Figure 4: Test sentences of semantic transfer for PP attachments. The CVG was able to transfer semantic word knowledge from two related training sentences. In contrast, the Stanford parser could not distinguish the PP attachments based on the word semantics.

[Figure 5 content: heatmaps of three binary composition matrices, for the sibling category pairs DT-NP, VP-NP and ADJP-NP.] Figure 5: Three binary composition matrices showing that head words dominate the composition. The model learns to not give determiners much importance. The two diagonals show clearly the two blocks that are multiplied with the left and right children, respectively.
5 Conclusion We introduced Compositional Vector Grammars (CVGs), a parsing model that combines the speed of small-state PCFGs with the semantic richness of neural word representations and compositional phrase vectors. The compositional vectors are learned with a new syntactically untied recursive neural network. This model is linguistically more plausible since it chooses different composition functions for a parent node based on the syntactic categories of its children. The CVG obtains 90.44% labeled F1 on the full WSJ test set and is 20% faster than the previous Stanford parser. Acknowledgments We thank Percy Liang for chats about the paper. Richard is supported by a Microsoft Research PhD fellowship. The authors gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-13-2-0040, and the DARPA Deep Learning program under contract number FA8650-10C-7020. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, AFRL, or the US government. 463 References Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. P. F. Brown, P. V. deSouza, R. L. Mercer, V. J. Della Pietra, and J. C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18. C. Callison-Burch. 2008. Syntactic constraints on paraphrases extracted from parallel corpora. In Proceedings of EMNLP, pages 196–205. E. Charniak and M. Johnson. 2005. Coarse-to-fine n-best parsing and maxent discriminative reranking. In ACL. E. Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of ACL, pages 132–139. M. Collins. 1997. Three generative, lexicalised models for statistical parsing. In ACL. M. Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589–637. R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of ICML, pages 160–167. F. Costa, P. Frasconi, V. Lombardo, and G. Soda. 2003. Towards incremental parsing of natural language using recursive neural networks. Applied Intelligence. J. Duchi, E. Hazan, and Y. Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12, July. J. L. Elman. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7(2-3):195–225. J. R. Finkel, A. Kleeman, and C. D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL, pages 959–967. D. Gildea and M. Palmer. 2002. The necessity of parsing for predicate argument recognition. In Proceedings of ACL, pages 239–246. C. Goller and A. K¨uchler. 1996. Learning taskdependent distributed representations by backpropagation through structure. In Proceedings of the International Conference on Neural Networks. J. Goodman. 1998. Parsing Inside-Out. Ph.D. thesis, MIT. D. Hall and D. Klein. 2012. Training factored pcfgs with expectation propagation. In EMNLP. J. Henderson. 2003. Neural network probability estimation for broad coverage parsing. In Proceedings of EACL. J. Henderson. 2004. Discriminative training of a neural network statistical parser. In ACL. Liang Huang and David Chiang. 2005. 
Better k-best parsing. In Proceedings of the 9th International Workshop on Parsing Technologies (IWPT 2005). E. H. Huang, R. Socher, C. D. Manning, and A. Y. Ng. 2012. Improving Word Representations via Global Context and Multiple Word Prototypes. In ACL. D. Kartsaklis, M. Sadrzadeh, and S. Pulman. 2012. A unified sentence space for categorical distributionalcompositional semantics: Theory and experiments. Proceedings of 24th International Conference on Computational Linguistics (COLING): Posters. D. Klein and C. D. Manning. 2003a. Accurate unlexicalized parsing. In Proceedings of ACL, pages 423–430. D. Klein and C.D. Manning. 2003b. Fast exact inference with a factored model for natural language parsing. In NIPS. J. K. Kummerfeld, D. Hall, J. R. Curran, and D. Klein. 2012. Parser showdown at the wall street corral: An empirical investigation of error types in parser output. In EMNLP. Q. V. Le, J. Ngiam, Z. Chen, D. Chia, P. W. Koh, and A. Y. Ng. 2010. Tiled convolutional neural networks. In NIPS. T. Matsuzaki, Y. Miyao, and J. Tsujii. 2005. Probabilistic cfg with latent annotations. In ACL. D. McClosky, E. Charniak, and M. Johnson. 2006. Effective self-training for parsing. In NAACL. S. Menchetti, F. Costa, P. Frasconi, and M. Pontil. 2005. Wide coverage natural language processing using kernel methods and neural networks for structured data. Pattern Recognition Letters, 26(12):1896–1906. T. Mikolov, W. Yih, and G. Zweig. 2013. Linguistic regularities in continuous spaceword representations. In HLT-NAACL. S. Petrov and D. Klein. 2007. Improved inference for unlexicalized parsing. In NAACL. S. Petrov, L. Barrett, R. Thibaux, and D. Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of ACL, pages 433–440. N. Ratliff, J. A. Bagnell, and M. Zinkevich. 2007. (Online) subgradient methods for structured prediction. In Eleventh International Conference on Artificial Intelligence and Statistics (AIStats). R. Socher, C. D. Manning, and A. Y. Ng. 2010. Learning continuous phrase representations and syntactic parsing with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop. 464 R. Socher, E. H. Huang, J. Pennington, A. Y. Ng, and C. D. Manning. 2011a. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection. In NIPS. MIT Press. R. Socher, C. Lin, A. Y. Ng, and C.D. Manning. 2011b. Parsing Natural Scenes and Natural Language with Recursive Neural Networks. In ICML. R. Socher, B. Huval, C. D. Manning, and A. Y. Ng. 2012. Semantic Compositionality Through Recursive Matrix-Vector Spaces. In EMNLP. B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning. 2004. Max-margin parsing. In Proceedings of EMNLP, pages 1–8. I. Titov and J. Henderson. 2006. Porting statistical parsers with data-defined kernels. In CoNLL-X. I. Titov and J. Henderson. 2007. Constituent parsing with incremental sigmoid belief networks. In ACL. J. Turian, L. Ratinov, and Y. Bengio. 2010. Word representations: a simple and general method for semisupervised learning. In Proceedings of ACL, pages 384–394. P. D. Turney and P. Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. 465
2013
45
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 466–475, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Discriminative state tracking for spoken dialog systems Angeliki Metallinou1∗, Dan Bohus2, and Jason D. Williams2 1University of Southern California, Los Angeles, CA, USA 2Microsoft Research, Redmond, WA, USA [email protected] [email protected] [email protected] Abstract In spoken dialog systems, statistical state tracking aims to improve robustness to speech recognition errors by tracking a posterior distribution over hidden dialog states. Current approaches based on generative or discriminative models have different but important shortcomings that limit their accuracy. In this paper we discuss these limitations and introduce a new approach for discriminative state tracking that overcomes them by leveraging the problem structure. An offline evaluation with dialog data collected from real users shows improvements in both state tracking accuracy and the quality of the posterior probabilities. Features that encode speech recognition error patterns are particularly helpful, and training requires relatively few dialogs. 1 Introduction Spoken dialog systems interact with users via natural language to help them achieve a goal. As the interaction progresses, the dialog manager maintains a representation of the state of the dialog in a process called dialog state tracking. For example, in a bus schedule information system, the dialog state might indicate the user’s desired bus route, origin, and destination. Dialog state tracking is difficult because automatic speech recognition (ASR) and spoken language understanding (SLU) errors are common, and can cause the system to misunderstand the user’s needs. At the same time, state tracking is crucial because the system relies on the estimated dialog state to choose actions – for example, which bus schedule information to present to the user. The dialog state tracking problem can be formalized as follows (Figure 1). Each system turn in the dialog is one datapoint. For each datapoint, the input consists of three items: a set of K features that describes the current dialog context, G dialog state hypotheses, and for each dialog state hypothesis, M features that describe that dialog state hypothesis. The task is to assign a probability distribution over the G dialog state hypotheses, plus a meta-hypothesis which indicates that none of the G hypotheses is correct. Note that G varies across turns (datapoints) – for example, in the first turn of Figure 1, G = 3, and in the second and third turns G = 5. Also note that the dialog state tracker is not predicting the contents of the dialog state hypotheses; the dialog state hypotheses contents are given by some external process, and the task is to predict a probability distribution over them, where the probability assigned to a hypothesis indicates the probability that it is correct. It is a requirement that the G hypotheses are disjoint; with the special “everything else” meta-hypothesis, exactly one hypothesis is correct by construction. After the dialog state tracker has output its distribution, this distribution is passed to a separate, downstream process that chooses what action to take next (e.g., how to respond to the user). 
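The per-turn interface just described can be made concrete with a small sketch; the type and field names below are illustrative, not those of the deployed systems, and the only constraint encoded is that the tracker must return a well-formed distribution over the G hypotheses plus the rest meta-hypothesis.

```python
# Sketch of the per-turn input/output interface for dialog state tracking (names are hypothetical).
from dataclasses import dataclass
from typing import List

@dataclass
class TrackerTurn:
    general_features: List[float]           # K features describing the current dialog context
    hypothesis_features: List[List[float]]  # G rows, each with M hypothesis-specific features
    # The tracker must output G + 1 probabilities: one per hypothesis, plus the "rest" meta-hypothesis.

def output_is_valid(turn: TrackerTurn, posterior: List[float], tol: float = 1e-6) -> bool:
    """Check that the tracker produced a distribution of the right length that sums to one."""
    G = len(turn.hypothesis_features)
    return len(posterior) == G + 1 and abs(sum(posterior) - 1.0) < tol
```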
Dialog state tracking can be seen an analogous to assigning a probability distribution over items on an ASR N-best list given speech input and the recognition output, including the contents of the N-best list. In this task, the general features describe the recognition overall (such as length of utterance), and the hypothesis-specific features describe each N-best entry (such as decoder cost). ∗Work done while at Microsoft Research 466 Another analogous task is assigning a probability distribution over a set of URLs given a search query and the URLs. Here, general features describe the whole set of results, e.g., number of words in the query, and hypothesis-specific features describe each URL, e.g., the fraction of query words contained in page. For dialog state tracking, most commercial systems use hand-crafted heuristics, selecting the SLU result with the highest confidence score, and discarding alternatives. In contrast, statistical approaches compute a posterior distribution over many hypotheses for the dialog state. The key insight is that dialog is a temporal process in which correlations between turns can be harnessed to overcome SLU errors. Statistical state tracking has been shown to improve task completion in end-to-end spoken dialog systems (Bohus and Rudnicky (2006); Young et al. (2010); Thomson and Young (2010)). Two types of statistical state tracking approaches have been proposed. Generative approaches (Horvitz and Paek (1999); Williams and Young (2007); Young et al. (2010); Thomson and Young (2010)) use generative models that capture how the SLU results are generated from hidden dialog states. These models can be used to track an arbitrary number of state hypotheses, but cannot easily incorporate large sets of potentially informative features (e.g. from ASR, SLU, dialog history), resulting in poor probability estimates. As an illustration, in Figure 1, a generative model might fail to assign the highest score to the correct hypothesis (61C) after the second turn. In contrast, discriminative approaches use conditional models, trained in a discriminative fashion (Bohus and Rudnicky (2006)) to directly estimate the distribution over a set of state hypotheses based on a large set of informative features. They generally produce more accurate distributions, but in their current form they can only track a handful of state hypotheses. As a result, the correct hypothesis may be discarded: for instance, in Figure 1, a discriminative model might consider only the top 2 SLU results, and thus fail to consider the correct 61C hypothesis at all. The main contribution of this paper is to develop a new discriminative model for dialog state tracking that can operate over an arbitrary number of hypotheses and still compute accurate probability estimates. We also explore the relative importance of different feature sets for this task, and measure the amount of data required to reliably train our model. 2 Data and experimental design We use data from the public deployment of two systems in the Spoken Dialog Challenge (Black et al. (2010)) which provide bus schedule information for Pittsburgh, USA. The systems, DS1 and DS2, were fielded by AT&T, and are described in Williams et al. (2010) and Williams (2012). Both systems followed a highly directed flow, separately collecting 5 slots. All users were asked for their bus route, origin, and destination; then, they were optionally prompted for a date and time. Each slot was explicitly or implicitly confirmed before collecting the next. 
At the end, bus times were presented. The two systems differed in acoustic models, confidence scoring model, state tracking method and parameters, number of supported routes (8 vs 40, for DS1 and DS2 respectively), presence of minor bugs, and user population. These differences yield distinctions in the distributions in the two corpora (Williams (2012)). In both systems, a dialog state hypothesis consists of a value of the user’s goal for a certain slot: for example, a state hypothesis for the origin slot might be “carnegie mellon university”. The number G of state hypotheses (e.g. slot values) observed so far depends on the dialog, and turn within that dialog. For instance, in Fig. 1, G progressively takes values 3, 5 and 5. Dialog state hypotheses with identical contents (e.g., the same bus route) are merged. The correctness of the SLU results was manually labeled by professional annotators. 2.1 Experimental setup To perform a comparative analysis of various state tracking algorithms, we test them offline, i.e., by re-running state tracking against the SLU results from deployment. However, care must be taken: when the improved state-tracker is installed into a dialog system and used to drive action selection, the distribution of the resulting dialog data (which is an input for the state tracker) will change. In other words, it is known a priori that the train and test distributions will be mismatched. Hence, when conducting offline experiments, if train and test data were drawn from the same matched distribution, this may overstate performance. 467 Figure 1: Overview of dialog state tracking. In this example, the dialog state contains the user’s desired bus route. At each turn, the system produces a spoken output. The user’s spoken response is processed to extract a set of spoken language understanding (SLU) results, each with a local confidence score. A set of G dialog state hypotheses is formed by considering all SLU results observed so far, including the current turn and all previous turns. For each state hypothesis, a feature extractor produces a set of M hypothesis-specific features, plus a single set of K general features that describes the current dialog context. The dialog state tracker uses these features to produce a distribution over the G state hypotheses, plus a meta-hypothesis rest which accounts for the possibility that none of the G hypotheses are correct. dataset train set test set MATCH1 half calls from DS2 remaining calls in DS2 MATCH2 half calls from DS1, half from DS2 remaining calls from DS1 and DS2 MISMATCH all calls from DS1 all calls from DS2 Table 1: Train-test data splits To account for this effect, we explicitly study train/test mismatch through three partitions of data from DS1 and DS2 (see Table 1): MATCH1 contains matched train/test data from the DS2 dataset; MATCH2 contains matched train/test data from both datasets; finally, MISMATCH contains mismatched train/test data. While the MISMATCH condition may not identically replicate the mismatch observed from deploying a new state tracker online (since online characteristics depend on user behavior) training on DS1 and testing on DS2 at least ensures the presence of some real-world mismatch. We assess performance via two metrics: accuracy and L2 norm. Accuracy indicates whether the state hypothesis with the highest assigned probability is correct, where rest is correct iff none of the SLU results prior to the current turn include the user’s goal. 
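Following the description of the two metrics, a minimal per-turn implementation might look as follows; it assumes the posterior is a vector over the G hypotheses plus rest, with `correct` giving the index of the correct entry, and it uses the Euclidean norm for the L2 (Brier-style) score.

```python
# Sketch of the per-turn accuracy and L2 metrics (averaging over turns and slots not shown).
import numpy as np

def turn_accuracy(posterior, correct):
    """1 if the highest-probability hypothesis is the correct one, else 0."""
    return float(int(np.argmax(posterior)) == correct)

def turn_l2(posterior, correct):
    """L2 distance between the posterior and the one-hot ground-truth vector."""
    truth = np.zeros(len(posterior))
    truth[correct] = 1.0
    return float(np.linalg.norm(np.asarray(posterior) - truth))
```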
High accuracy is important as a dialog system must ultimately commit to a single interpretation of the user’s needs – e.g., it must commit to a route in order to provide bus timetable information. In addition, the L2 norm (or Brier score, Murphy (1973)) also captures how well calibrated the output probabilities are, which is crucial to decision theoretic methods for action selection. The L2 norm is computed between the output posterior and the ground-truth vector, which has 1 in the position of the correct item and 0 elsewhere. Both metrics are computed for each slot in each turn, and reported by averaging across all turns and slots. 468 2.2 Hand-crafted baseline state tracker As a baseline, we construct a hand-crafted state tracking rule that follows a strategy common in commercial systems: it returns the SLU result with the maximum confidence score, ignoring all other hypotheses. Although this is very a simple rule, it is very often effective. For example, if the user says “no” to an explicit confirmation or “go back” to an implicit confirmation, they are asked the same question again, which gives an opportunity for a higher confidence score. Of the G possible hypotheses for a slot, we denote the number actually assigned a score by a model as ˜G, so in this heuristic baseline ˜G = 1. The performance of this baseline (BASELINE in Table 3) is relatively strong because the top SLU result is by far most likely to be correct, and because the confidence score was already trained with slot-specific speech data (Williams and Balakrishnan (2009), Williams (2012)). However, this simple rule can’t make use of SLU results on the N-best list, or statistical priors; these limitations motivate the use of statistical state trackers, introduced next. 3 Generative state tracking Generative state tracking approaches leverage models that describe how SLU results are generated from a hidden dialog state, denoted g. The user’s true (unobserved) action u is conditioned on g and the system action a via a user action model P(u|g, a), and also on the observed SLU result ˜u via a model of how SLU results are generated P(˜u|u). Given a prior distribution b(g) and a result ˜u, an updated distribution b′(g) can be computed by summing over all hidden user actions u: b′(g) = η X u P(˜u|u) · P(u|g, a)b(g) (1) where η is a normalizing constant (Williams and Young (2007)). Generative approaches model the posterior over all possible dialog state hypotheses, including those not observed in the SLU N-best lists. In general this is computationally intractable because the number of states is too large. One approach to scaling up is to group g into a few partitions, and to track only states suggested by observed SLU results (Young et al. (2010); Williams (2010); Gaˇsi´c and Young (2011)). Another approach is to factor the components of a dialog state, make assumptions about conditional independence between the components, and apply approximate inference techniques such as loopy belief propagation (Thomson and Young (2010)). In deployment, DS1 and DS2 used the AT&T Statistical Dialog Toolkit (ASDT) for dialog state tracking (Williams (2010); AT&T Statistical Dialog Toolkit). ASDT implements a generative update of the form of Eq 1, and uses partitions to maintain tractability. Component models were learned from dialog data from a different dialog system. A maximum of ˜G = 20 state hypotheses were tracked for each slot. 
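The generative update of Eq. 1 can be sketched for an enumerable set of goals as below; the user-action model P(u|g,a) and SLU-error model P(ũ|u) are passed in as placeholder functions, and the partition machinery that makes the real update tractable is not reproduced.

```python
# Sketch of the generative belief update b'(g) = eta * sum_u P(u_tilde|u) P(u|g,a) b(g)  (Eq. 1).
def generative_update(belief, slu_result, system_action, user_actions,
                      p_obs_given_user, p_user_given_goal):
    """belief: dict goal -> probability; user_actions: iterable of candidate true user actions."""
    new_belief = {}
    for g, b_g in belief.items():
        total = sum(p_obs_given_user(slu_result, u) * p_user_given_goal(u, g, system_action)
                    for u in user_actions)
        new_belief[g] = total * b_g
    z = sum(new_belief.values())                      # normalizing constant eta^-1
    return {g: v / z for g, v in new_belief.items()} if z > 0 else dict(belief)
```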
The performance (GENONLINE in Table 3), was worse than BASELINE: an in-depth analysis attributed this to the mismatch between train and test data in the component models, and to the underlying flawed assumption of eq. 1 that observations at different turns are independent conditioned on the dialog state – in practice, confusions made by speech recognition are highly correlated (Williams (2012)). For all datasets, we re-estimated the models on the train set and re-ran generative tracking with an unlimited number of partitions (i.e., ˜G = G); see GENOFFLINE in Table 3. The re-estimated tracker improved accuracy in MATCH conditions, but degraded accuracy in the MISMATCH condition. This can be partly attributed to the difficulty in estimating accurate initial priors b(g) for MISMATCH, where the bus route, origin, and destination slot values in train and test systems differed significantly. 4 Discriminative State Tracking: Preliminaries and existing work In contrast to generative models, discriminative approaches to dialog state tracking directly predict the correct state hypothesis by leveraging discriminatively trained conditional models of the form b(g) = P(g|f), where f are features extracted from various sources, e.g. ASR, SLU, dialog history, etc. In this work we will use maximum entropy models. We begin by briefly introducing these models in the next subsection. We then describe the features used, and finally review existing discriminative approaches for state tracking which serve as a starting point for the new approach we introduce in Section 5. 469 4.1 Maximum entropy models The maximum entropy framework (Berger et al. (1996)) models the conditional probability distribution of the label y given features x, p(y|x) via an exponential model of the form: P(y|x, λ) = exp(P i∈I λiφi(x, y)) P y∈Y exp(P i∈I λiφi(x, y)) (2) where φi(x, y) are feature functions jointly defined on features and labels, and λi are the model parameters. The training procedure optimizes the parameters λi to maximize the likelihood over the data instances subject to regularization penalties. In this work, we optimize the L1 penalty using a cross-validation process on the train set, and we use a fixed L2 penalty based on heuristic based on the dataset size. The same optimization is used for all models. 4.2 Features Discriminative approaches for state tracking rely on informative features to predict the correct dialog state. In this work we designed a set of hypothesis-specific features that convey information about the correctness of a particular state hypothesis, and a set of general features that convey information about the correctness of the rest metahypothesis. Hypothesis-specific features can be grouped into 3 categories: base, history and confusion features. Base features consider information about the current turn, including rank of the current SLU result (current hypothesis), the SLU result confidence score(s) in the current N-best list, the difference between the current hypothesis score and the best hypothesis score in the current N-best list, etc. History features contain additional useful information about past turns. Those include the number of times an SLU result has been observed before, the number of times an SLU result has been observed before at a specific rank such as rank 1, the sum and average of confidence scores of SLU results across all past recognitions, the number of possible past user negations or confirmations of the current SLU result etc. 
Confusion features provide information about likely ASR errors and confusability. Some recognition results are more likely to be incorrect than others – background noise tends to trigger certain results, especially short bus routes like “p”. Moreover, similar sounding phrases are more likely to be confused. The confusion features were computed on a subset of the training data. For each SLU result we computed the fraction of the time that the result was correct, and the binomial 95% confidence interval for that estimate. Those two statistics were pre-computed for all SLU results in the training data subset, and were stored in a lookup table. At runtime, when an SLU hypothesis is recognized, its statistics from this lookup table are used as features. Similar statistics were computed for prior probability of an SLU result appearing on an N-best list, and prior probability of SLU result appearance at specific rank positions of an N-best list, prior probability of confusion between pairs of SLU results, and others. General features provide aggregate information about dialog history and SLU results, and are shared across different SLU results of an N-best list. For example, from the current turn, we use the number of distinct SLU results, the entropy of the confidence scores, the best path score of the word confusion network, etc. We also include features that contain aggregate information about the sequence of all N-best lists up to the current turn, such as the mean and variance of N-best list lengths, the number of distinct SLU results observed so far, the entropy of their corresponding confidence scores, and others. We denote the number of hypothesis-specific features as M, and the number of general features as K. K and M are each in the range of 100−200, although M varies depending on whether history and confusion features are included. For a given dialog turn with G state hypotheses, there are a total of G ∗M + K distinct features. 4.3 Fixed-length discriminative state tracking In past work, Bohus and Rudnicky (2006) introduced discriminative state tracking, casting the problem as standard multiclass classification. In this setup, each turn constitutes one data instance. Since in dialog state tracking the number of state hypotheses varies across turns, Bohus and Rudnicky (2006) chose a subset of ˜G state hypotheses to score. In this work we used a similar setup, where we considered the top G1 SLU results from the current N-best list at turn t, and the top G2 and G3 SLU results from the previous Nbest lists at turns t −1 and t −2. The problem can then be formulated as multiclass classification 470 over ˜G+1 = G1+G2+G3+1 classes, where the correct class indicates which of these hypotheses (or rest) is correct. We experimented with different values and found that G1 = 3, G2 = 2, and G3 = 1 ( ˜G = 6) yielded the best performance. Feature functions are defined in the standard way, with one feature function φ and weight λ for each (feature,class) pair. Formally, φ of eq. 2 is defined as φi,j(x, y) = xiδ(y, j), where δ(y, j) = 1 if y = j and 0 otherwise. i indexes over the ˜GM + K features and j over the ˜G + 1 classes.1 The two-dimensional subscript i, j if used for clarity of notation, but is otherwise identical in role to the one-dimension subscript i in Eq 2. Figure 2 illustrates the relationship between hypotheses and weights. Results are reported as DISCFIXED in Table 3. 
In the MATCH conditions, performance is generally higher than the other baselines, particularly when confusion features are included. In the MISMATCH condition, performance is worse that the BASELINE. A strength of this approach is that it enables features from every hypothesis to independently affect every class. However, the total number of feature functions (hence weights to learn) is ( ˜G + 1) × ( ˜GM + K), which increases quadratically with the number of hypotheses considered ˜G. Although regularization can help avoid overfitting per se, it becomes a more challenging task with more features. Learning weights for each (feature,class) pair has the drawback that the effect of hypothesis-specific features such as confidence have to be learned separately for every hypothesis. Also, although we know in advance that posteriors for a dialog state hypothesis are most dependent on the features corresponding to that hypothesis, in this approach the features from all hypotheses are pooled together and the model is left to discover these correspondences via learning. Furthermore, items lower down on the SLU N-best list are much less likely to be correct: an item at a very deep position (say 19) might never be correct in the training data – when this occurs, it is unreasonable to expect posteriors to be estimated accurately. As a result of these issues, in practice ˜G is limited to being a small number – here we found that increasing ˜G > 6 degraded performance. Yet with 1Although in practice, maximum entropy model constraints render weights for one class redundant. ˜G = 6, we found that in 10% of turns, the correct state hypothesis was present but was being discarded by the model, which substantially reduces the upper-bound on tracker performance. In the next section, we introduce a novel discriminative state tracking approach that addresses the above limitations, and enables jointly considering an arbitrary number of state hypotheses, by exploiting the structure inherent in the dialog state tracking problem. 5 Dynamic discriminative state tracking The key idea in the proposed approach is to use feature functions that link hypothesis-specific features to their corresponding dialog state hypothesis. This approach makes it straightforward to model relationships such as “higher confidence for an SLU result increases the probability of its corresponding state hypothesis being correct”. This formulation also decouples the number of models parameters (i.e. weights to learn) from the number of hypotheses considered, allowing an arbitrary number of dialog states hypotheses to be scored. Figure 2: The DISCFIXED model is a traditional maximum entropy model for classification. Every feature in every hypothesis is linked to every hypothesis, requiring ( ˜G + 1)( ˜GM + K) weights. We begin by re-stating how features are indexed. Recall each dialog state hypothesis has M hypothesis-specific features; for each hypothesis, we concatenate these M features with the K general features, which are identical for all hypotheses. For the meta-hypothesis rest, we again concatenate M+K features, where the M hypothesisspecific features take special undefined values. We write xg i to refer to the ith feature of hypothesis g, where i ranges from 1 to M + K and g from 1 to G + 1. 471 Figure 3: The DISCDYN model presented in this paper exploits the structure of the state tracking problem. Features are linked to only their own hypothesis, and weights are shared across all hypotheses, requiring M + K weights. 
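The feature layout just described can be sketched as follows; the choice of zero as the placeholder for the "undefined" hypothesis-specific features of the rest meta-hypothesis is an assumption made only for illustration.

```python
# Sketch of building the (G+1) x (M+K) feature rows x^g: each hypothesis's M features
# concatenated with the K shared general features, plus a row for the rest meta-hypothesis.
import numpy as np

def build_feature_rows(hyp_features, general_features, undefined_value=0.0):
    """hyp_features: list of G length-M vectors; general_features: length-K vector."""
    general_features = np.asarray(general_features, dtype=float)
    M = len(hyp_features[0]) if hyp_features else 0
    rows = [np.concatenate([np.asarray(h, dtype=float), general_features]) for h in hyp_features]
    rest = np.concatenate([np.full(M, undefined_value), general_features])  # placeholder values
    return np.vstack(rows + [rest])
```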
algorithm description BASELINE simple hand-crafted rule GENONLINE generative update, in deployed system GENOFFLINE generative update, re-trained and run offline DISCFIXED discr. fixed size multiclass (7 classes) DISCDYN1 discr. joint dynamic estimation DISCDYN2 discr. joint dynamic estimation, using indicator encoding of ordinal features DISCDYN3 discr. joint dynamic estimation, using indicator encoding and ordinal-ordinal conjunctions DISCIND discr. separate estimation Table 2: Description of the various implemented state tracking algorithms The model is based on M + K feature functions. However, unlike in traditional maximum entropy models such as the fixed-position model above, these features functions are dynamically defined when presented with each turn. Specifically, for a turn with G hypotheses, we define φi(x, y = g) = xg i , where y ranges over the set of possible dialog states G + 1 (and as above i ∈1 . . . M + K). The feature function φi is dynamic in that the domain of y – i.e., the number of dialog state hypotheses to score – varies from turn to turn. With feature functions defined this way, standard maximum entropy optimization is then applied to learn the corresponding set of M + K weights, denoted λi. Fig. 3 shows the relationship of hypotheses and weights. In practice, this formulation – in which general features are duplicated across every dialog state hypothesis – may require some additional feature engineering: for every hypothesis g and general feature i, the value of that general feature xg i will be multiplied by the same weight λi. The result is that any setting of λi affects all scores identically, with no net change to the resulting posterior. Nonetheless, general features do contain useful information for state tracking; to make use of them, we add conjunctions (combinations) of general and hypothesis-specific features. We use 3 different feature variants. In DISCDYN1, we use the original feature set, ignoring the problem described above (so that the general features contribute no information), resulting in M + K weights. DISCDYN2 adds indicator encodings of the ordinal-valued hypothesisspecific features. For example, rank is encoded as a vector of boolean indicators, where the first indicator is nonzero if rank = 1, the second is nonzero if rank = 2, and the third if rank ≥ 3. This provides a more detailed encoding of the ordinal-valued hypothesis-specific features, although it still ignores information from the general features. This encoding increases the number of weights to learn to about 2(M + K). Finally, DISCDYN3 extends DISCDYN2 by including conjunctions of the ordinal-valued general features with ordinal-valued hypothesis-specific features. For example, if the 3-way hypothesisspecific indicator feature for rank described above were conjoined with a 4-way general indicator feature for dialog state, the result would be an indicator of dimension 3 × 4 = 12. This expansion results in approximately 10(M + K) weights to learn in DISCDYN3.2 For comparison, we also estimated a simpler alternative model, called DISCIND. This model consists of 2 binary classifiers: the first one scores each hypothesis in isolation, using the M hypothesis-specific features for that hypothesis + the K general features for that turn, and outputs a (single) probability that the hypothesis is correct. For this classifier, each hypothesis (not each turn) defines a data instance. 
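Before turning to the second classifier of DISCIND, the scoring step of the dynamically defined model above can be summarized in a short sketch: a single weight vector of length M+K is applied to every hypothesis's feature row, so the parameter count is independent of G. The weights here are illustrative and the regularized likelihood training is omitted.

```python
# Sketch of the DISCDYN posterior: shared weights applied to every hypothesis row x^g.
import numpy as np

def discdyn_posterior(feature_rows, weights):
    """feature_rows: (G+1) x (M+K) array (as built above); weights: length M+K vector."""
    scores = feature_rows @ weights          # one score per hypothesis, including rest
    scores -= scores.max()                   # subtract max for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()     # P(y = g | x, lambda), Eq. 2 with dynamic classes
```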
The second binary classifier takes the K general features, and outputs a probability that the rest meta-hypothesis is correct. For this second classifier, each turn defines one data instance. The output of these two models is then calibrated with isotonic regression (Zadrozny and Elkan (2002)) and normalized to generate the posterior over all hypotheses. 2We explored adding all possible conjunctions, including real-valued features, but this increased memory and computational requirements dramatically without performance gains. 472 Metric Accuracy (larger numbers better) L2 (smaller numbers better) Dataset MATCH1 MATCH2 MISMATCH MATCH1 MATCH2 MISMATCH Features b bc bch b bc bch b bc bch b bc bch b bc bch b bc bch BASELINE 61.5 61.5 61.5 63.4 63.4 63.4 62.5 62.5 62.5 27.1 27.1 27.1 25.5 25.5 25.5 27.3 27.3 27.3 GENONLINE 54.4 54.4 54.4 55.8 55.8 55.8 54.8 54.8 54.8 34.8 34.8 34.8 32.0 32.0 32.0 34.8 34.8 34.8 GENOFFLINE 57.1 57.1 57.1 60.1 60.1 60.1 51.8 51.8 51.8 37.6 37.6 37.6 33.4 33.4 33.4 42.0 42.0 42.0 DISCFIXED 61.9 66.7 65.3 63.6 69.7 68.8 59.1 61.9 59.3 27.2 23.6 24.4 25.8 21.9 22.4 28.9 27.8 27.8 DISCDYN1 62.0 70.9 71.1 64.4 72.4 72.9 59.4 61.8 62.3 26.3 21.3 20.9 25.0 20.4 20.1 27.7 26.3 25.9 DISCDYN2 62.6 71.3 71.5 65.7 72.1 72.2 61.9 63.2 63.1 26.3 21.4 21.2 24.4 20.5 20.4 26.9 25.8 25.4 DISCDYN3 63.6 70.1 70.9 65.9 72.1 70.7 60.7 62.1 62.9 26.2 21.5 21.4 24.3 20.6 20.7 27.1 25.9 26.1 DISCIND 62.4 69.8 70.5 63.4 71.5 71.8 59.9 63.3 62.2 26.7 23.3 22.5 25.7 21.8 20.7 28.4 27.3 28.8 Table 3: Performance of the different algorithms on each dataset using three feature combinations. Base features are denoted as b, ASR/SLU confusion features as c and history features as h. Performance for the feature combinations bh is omitted for space; it is between b and bc. 6 Results and discussion The implemented state tracking methods are summarized in Table 2, and our results are presented in Table 3. These results suggest several conclusions. First, discriminative approaches for state tracking broadly outperform generative methods. Since discriminative methods incorporate many features and are trained directly to optimize performance, this is perhaps unsurprising for the MATCH conditions. It is interesting that discriminative methods are also superior in the more realistic MISMATCH setting, albeit with smaller gains. This result suggests that discriminative methods have good promise when deployed into real systems, where mismatch between training and test distributions is expected. Second, the dynamic discriminative DISCDYN models also outperformed the fixed-length discriminative methods. This shows the benefit of a model which can score every dialog state hypotheses, rather than a fixed subset. Third, the three variants of the DISCDYN model, which progressively contain more detailed feature encoding and conjunctions, perform similarly. This suggests that a relatively simple encoding is sufficient to achieve good performance, as the feature indicators and conjunctions present in DISCDYN2 and DISCDYN3 give only a small additional increase. Among the discriminative models, the jointlyoptimized DISCDYN versions also slightly outperform the simpler, independently-optimized DISCIND version. This is to be expected, for two reasons: first, DISCIND is trained on a per-hypothesis basis, while the DISCDYN models are trained on a per-turn basis, which is the true performance metric. For example, some turns have 1 hypothesis and others have 100, but DISCIND training counts all hypotheses equally. 
Second, model parameters in DISCIND are trained independently of competing hypotheses. However, they should rather be adjusted specifically so that the correct item receives a larger score than incorrect items – not merely to increase scores for correct items and decrease scores for incorrect items in isolation – and this is what is done in the DISCDYN models. The analysis of various feature sets indicates that the ASR/SLU error correlation (confusion) features yield the largest improvement – c.f. feature set bc compared to b in Table 3. The improvement is smallest for MISMATCH, which underscores the challenges of mismatched train and test conditions during a realistic runtime scenario. Note, however, that we have constructed a highly mismatched case where we train on DS1 (that supports just 8 routes) and test on DS2 (that supports 40 routes). Therefore, many route, origin and destination slot values in the test data do not appear in the training data. Hence, it is unsurprising that the positive effect of confusion features would decrease. While Table 3 shows performance measures averaged across all turns, Table 4 breaks down performance measures by slot, using the full feature set bch and the realistic MISMATCH dataset. Results here show a large variation in performance across the different slots. For the date and time slots, there is an order of magnitude less data than for the other slots; however performance for dates is quite good, whereas times is rather poor. We believe this is because the SLU confusion features can be estimated well for slots with small cardinalities (there are 7 possible values for the day), and less well for slots with large cardinalities (there are 24 × 60 = 1440 possible time values). This sug473 Accuracy (larger numbers better) algorithms rout origin dest. date time BASELINE 53.81 66.49 67.78 71.88 52.32 GENONLINE 50.02 54.11 59.05 75.78 35.02 GENOFFLINE 48.12 58.82 58.98 72.66 20.25 DISCFIXED 52.83 67.81 70.67 71.88 33.34 DISCDYN1 54.28 68.24 68.53 79.69 40.51 DISCDYN2 56.18 68.42 70.10 80.47 40.51 DISCDYN3 54.52 66.24 67.96 82.81 43.04 DISCIND 54.25 68.84 70.79 78.13 38.82 L2 metric (smaller numbers better) algorithms route origin dest. date time BASELINE 33.15 24.67 24.68 21.61 32.35 GENONLINE 35.50 35.10 31.13 19.86 52.58 GENOFFLINE 46.42 35.73 37.76 19.97 70.30 DISCFIXED 34.09 23.92 23.35 17.59 40.15 DISCDYN1 31.30 23.01 23.07 15.29 37.02 DISCDYN2 30.53 22.40 22.74 13.58 37.59 DISCDYN3 31.58 23.86 23.68 13.93 37.52 DISCIND 36.50 23.45 23.41 15.20 45.43 Table 4: Performance per slot on dataset MISMATCH using the full feature set bch. (a) MISMATCH dataset (b) MATCH2 dataset Figure 4: Accuracy vs. amount of training data gests that the amount of data required to estimate a good model may depend on the cardinality of slot values. Finally, in Figure 4 we show how performance varies with different amounts of training data for the MATCH2 and MISMATCH datasets, where the full training set size is approximately 5600 and 4400 turns, respectively. In both cases asymptotic performance is reached after about 2000 turns, or about 150 dialogs. This is particularly encouraging, as it suggests models could be learned or adapted online with relatively little data, or could even be individually tailored to particular users. 7 Conclusion and Future Work Dialog state tracking is crucial to the successful operation of spoken dialog systems. 
Recently developed statistical approaches are promising as they fully utilize the dialog history, and can incorporate priors from past usage data. However, existing methodologies are either limited in their accuracy or their coverage, both of which hamper performance. In this paper, we have introduced a new model for discriminative state tracking. The key idea is to exploit the structure of the problem, in which each dialog state hypothesis has features drawn from the same set. In contrast to past approaches to discriminative state tracking which required a number of parameters quadratic in the number of state hypotheses, our approach uses a constant number of parameters, invariant to the number of state hypotheses. This is a crucial property that enables generalization and dealing with an unlimited number of hypotheses, overcoming a key limitation in previous models. We evaluated the proposed method and compared it to existing generative and discriminative approaches on a corpus of real-world humancomputer dialogs chosen to include a mismatch between training and test, as this will be found in deployments. Results show that the proposed model exceeds both the accuracy and probability quality of all baselines when using the richest feature set, which includes information about common ASR confusions and dialog history. The model can be trained efficiently, i.e. only about 150 training dialogs are necessary. The next step is to incorporate this approach into a deployed dialog system, and use the estimated posterior over dialog states as input to the action selection process. In future, we also hope to explore unsupervised online adaptation, where the trained model can be updated as test data is processed. Acknowledgments We thank Patrick Nguyen for helpful discussions regarding maximum entropy modeling and feature functions for handling structured and dynamic output classification problems. References AT&T Statistical Dialog Toolkit. AT&T Statistical Dialog Toolkit. http://www2.research. att.com/sw/tools/asdt/, 2013. Adam Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22:39–71, 1996. 474 Alan W. Black, S. Burger, B. Langner, G. Parent, and M. Eskenazi. Spoken dialog challenge 2010. In Proc. of Workshop on Spoken Language Technologies (SLT), 2010. Dan Bohus and Alex Rudnicky. A k hypotheses + other belief updating model. In Proc. of AAAI Workshop on Statistical and Empirical Approaches to Spoken Dialog Systems, 2006. Milica Gaˇsi´c and Steve Young. Effective handling of dialogue state in the hidden information state pomdp dialogue manager. ACM Transactions on Speech and Language Processing, 7, 2011. Eric Horvitz and Tim Paek. A computational architecture for conversation. In Proc. of the 7th Intl. Conf. on User Modeling, 1999. Allan H Murphy. A new vector partition of the probability score. Journal of Applied Meteorology, 12:595–600, 1973. Blaise Thomson and Steve Young. Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems. Computer Speech and Language, 24(4):562–588, 2010. Jason D. Williams. Incremental partition recombination for efficient tracking of multiple dialogue states. In Proc. of ICASSP, 2010. Jason D. Williams. Challenges and opportunities for state tracking in statistical spoken dialog systems: Results from two public deployments. 
IEEE Journal of Selected Topics in Signal Processing, Special Issue on Advances in Spoken Dialogue Systems and Mobile Interface, 6(8): 959–970, 2012. Jason D. Williams and Suhrid Balakrishnan. Estimating probability of correctness for asr n-best lists. In Proc. SigDial Conference, 2009. Jason D. Williams and Steve Young. Partially observable markov decision processes for spoken dialog systems. Computer Speech and Language, 21:393–422, 2007. Jason D. Williams, Iker Arizmendi, and Alistair Conkie. Demonstration of AT&T Let’s Go: A production-grade statistical spoken dialog system. In Proc of Workshop on Spoken Language Technologies (SLT), 2010. Steve Young, Milica Gaˇsi´c, Simon Keizer, Franc¸ois Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. The hidden information state model: a practical framework for POMDP-based spoken dialogue management. Computer Speech and Language, 24(2):150– 174, 2010. Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass probability estimates. In Proc. of the eighth ACM SIGKDD Intl. Conf on Knowledge Discovery and Data mining, pages 694–699, 2002. 475
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 476–485, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Leveraging Synthetic Discourse Data via Multi-task Learning for Implicit Discourse Relation Recognition Man Lan and Yu Xu Department of Computer Science and Technology East China Normal University Shanghai, P.R.China [email protected] [email protected] Zheng-Yu Niu Baidu Inc. Beijing, P.R.China [email protected] Abstract To overcome the shortage of labeled data for implicit discourse relation recognition, previous works attempted to automatically generate training data by removing explicit discourse connectives from sentences and then built models on these synthetic implicit examples. However, a previous study (Sporleder and Lascarides, 2008) showed that models trained on these synthetic data do not generalize very well to natural (i.e. genuine) implicit discourse data. In this work we revisit this issue and present a multi-task learning based system which can effectively use synthetic data for implicit discourse relation recognition. Results on PDTB data show that under the multi-task learning framework our models with the use of the prediction of explicit discourse connectives as auxiliary learning tasks, can achieve an averaged F1 improvement of 5.86% over baseline models. 1 Introduction The task of implicit discourse relation recognition is to identify the type of discourse relation (a.k.a. rhetorical relation) hold between two spans of text, where there is no discourse connective (a.k.a. discourse marker, e.g., but, and) in context to explicitly mark their discourse relation (e.g., Contrast or Explanation). It can be of great benefit to many downstream NLP applications, such as question answering (QA) (Verberne et al., 2007), information extraction (IE) (Cimiano et al., 2005), and machine translation (MT), etc. This task is quite challenging due to two reasons. First, without discourse connective in text, the task is quite difficult in itself. Second, implicit discourse relation is quite frequent in text. For example, almost half the sentences in the British National Corpus held implicit discourse relations (Sporleder and Lascarides, 2008). Therefore, the task of implicit discourse relation recognition is the key to improving end-to-end discourse parser performance. To overcome the shortage of manually annotated training data, (Marcu and Echihabi, 2002) proposed a pattern-based approach to automatically generate training data from raw corpora. This line of research was followed by (Sporleder and Lascarides, 2008) and (Blair-Goldensohn, 2007). In these works, sentences containing certain words or phrases (e.g. but, although) were selected out from raw corpora using a patternbased approach and then these words or phrases were removed from these sentences. Thus the resulting sentences were used as synthetic training examples for implicit discourse relation recognition. Since there is ambiguity of a word or phrase serving for discourse connective (i.e., the ambiguity between discourse and non-discourse usage or the ambiguity between two or more discourse relations if the word or phrase is used as a discourse connective), the synthetic implicit data would contain a lot of noises. 
Later, with the release of manually annotated corpus, such as Penn Discourse Treebank 2.0 (PDTB) (Prasad et al., 2008), recent studies performed implicit discourse relation recognition on natural (i.e., genuine) implicit discourse data (Pitler et al., 2009) (Lin et al., 2009) (Wang et al., 2010) with the use of linguistically informed features and machine learning algorithms. (Sporleder and Lascarides, 2008) conducted a study of the pattern-based approach presented by (Marcu and Echihabi, 2002) and showed that the model built on synthetical implicit data has not generalize well on natural implicit data. They found some evidence that this behavior is largely independent of the classifiers used and seems to lie in the data itself (e.g., marked and unmarked examples may be too dissimilar linguistically and 476 removing unambiguous markers in the automatic labelling process may lead to a meaning shift in the examples). We state that in some cases it is true while in other cases it may not always be so. A simple example is given here: (E1) a. We can’t win. b. [but] We must keep trying. We may find that in this example whether the insertion or the removal of connective but would not lead to a redundant or missing information between the above two sentences. That is, discourse connectives can be inserted between or removed from two sentences without changing the semantic relations between them in some cases. Another similar observation is in the annotation procedure of PDTB. To label implicit discourse relation, annotators inserted connective which can best express the relation between sentences without any redundancy1. We see that there should be some linguistical similarities between explicit and implicit discourse examples. Therefore, the first question arises: can we exploit this kind of linguistic similarity between explicit and implicit discourse examples to improve implicit discourse relation recognition? In this paper, we propose a multi-task learning based method to improve the performance of implicit discourse relation recognition (as main task) with the help of relevant auxiliary tasks. Specifically, the main task is to recognize the implicit discourse relations based on genuine implicit discourse data and the auxiliary task is to recognize the implicit discourse relations based on synthetic implicit discourse data. According to the principle of multi-task learning, the learning model can be optimized by the shared part of the main task and the auxiliary tasks without bring unnecessary noise. That means, the model can learn from synthetic implicit data while it would not bring unnecessary noise from synthetic implicit data. Although (Sporleder and Lascarides, 2008) did not mention, we speculate that another possible reason for the reported worse performance may result from noises in synthetic implicit discourse data. These synthetic data can be generated from two sources: (1) raw corpora with the use of pattern-based approach in (Marcu and Echihabi, 1According to the PDTB Annotation Manual (PDTBGroup, 2008), if the insertion of connective leads to “redundancy”, the relation is annotated as Alternative lexicalizations (AltLex), not implicit. 2002) and (Sporleder and Lascarides, 2008), and (2) manually annotated explicit data with the removal of explicit discourse connectives. Obviously, the data generated from the second source is cleaner and more reliable than that from the first source. 
Therefore, the second question to address in this work is: whether synthetic implicit discourse data generated from explicit discourse data source (i.e., the second source) can lead to a better performance than that from raw corpora (i.e., the first source)? To answer this question, we will make a comparison of synthetic discourse data generated from two corpora, i.e., the BILLIP corpus and the explicit discourse data annotated in PDTB. The rest of this paper is organized as follows. Section 2 reviews related work on implicit discourse relation classification and multi-task learning. Section 3 presents our proposed multi-task learning method for implicit discourse relation classification. Section 4 provides the implementation technique details of the proposed multi-task method. Section 5 presents experiments and discusses results. Section 6 concludes this work. 2 Related Work 2.1 Implicit discourse relation classification 2.1.1 Unsupervised approaches Due to the lack of benchmark data for implicit discourse relation analysis, earlier work used unlabeled data to generate synthetic implicit discourse data. For example, (Marcu and Echihabi, 2002) proposed an unsupervised method to recognize four discourse relations, i.e., Contrast, Explanation-evidence, Condition and Elaboration. They first used unambiguous pattern to extract explicit discourse examples from raw corpus. Then they generated synthetic implicit discourse data by removing explicit discourse connectives from sentences extracted. In their work, they collected word pairs from synthetic data set as features and used machine learning method to classify implicit discourse relation. Based on this work, several researchers have extended the work to improve the performance of relation classification. For example, (Saito et al., 2006) showed that the use of phrasal patterns as additional features can help a word-pair based system for discourse relation prediction on a Japanese corpus. Furthermore, (Blair-Goldensohn, 2007) improved previous work with the use of parameter optimization, 477 topic segmentation and syntactic parsing. However, (Sporleder and Lascarides, 2008) showed that the training model built on a synthetic data set, like the work of (Marcu and Echihabi, 2002), may not be a good strategy since the linguistic dissimilarity between explicit and implicit data may hurt the performance of a model on natural data when being trained on synthetic data. 2.1.2 Supervised approaches This line of research work approaches this relation prediction problem by recasting it as a classification problem. (Soricut and Marcu, 2003) parsed the discourse structures of sentences on RST Bank data set (Carlson et al., 2001) which is annotated based on Rhetorical Structure Theory (Mann and Thompson, 1988). (Wellner et al., 2006) presented a study of discourse relation disambiguation on GraphBank (Wolf et al., 2005). Recently, (Pitler et al., 2009) (Lin et al., 2009) and (Wang et al., 2010) conducted discourse relation study on PDTB (Prasad et al., 2008) which has been widely used in this field. 2.1.3 Semi-supervised approaches Research work in this category exploited both labeled and unlabeled data for discourse relation prediction. (Hernault et al., 2010) presented a semi-supervised method based on the analysis of co-occurring features in labeled and unlabeled data. Very recently, (Hernault et al., 2011) introduced a semi-supervised work using structure learning method for discourse relation classification, which is quite relevant to our work. 
However, they performed discourse relation classification on both explicit and implicit data. And their work is different from our work in many aspects, such as, feature sets, auxiliary task, auxiliary data, class labels, learning framework, and so on. Furthermore, there is no explicit conclusion or evidence in their work to address the two questions raised in Section 1. Unlike their previous work, our previous work (Zhou et al., 2010) presented a method to predict the missing connective based on a language model trained on an unannotated corpus. The predicted connective was then used as a feature to classify the implicit relation. 2.2 Multi-task learning Multi-task learning is a kind of machine learning method, which learns a main task together with other related auxiliary tasks at the same time, using a shared representation. This often leads to a better model for the main task, because it allows the learner to use the commonality among the tasks. Many multi-task learning methods have been proposed in recent years, (Ando and Zhang, 2005a), (Argyriou et al., 2008), (Jebara, 2004), (Bonilla et al., 2008), (Evgeniou and Pontil, 2004), (Baxter, 2000), (Caruana, 1997), (Thrun, 1996). One group uses task relations as regularization terms in the objective function to be optimized. For example, in (Evgeniou and Pontil, 2004) the regularization terms make the parameters of models closer for similar tasks. Another group is proposed to find the common structure from data and then utilize the learned structure for multi-task learning (Argyriou et al., 2008) (Ando and Zhang, 2005b). 3 Multi-task Learning for Discourse Relation Prediction 3.1 Motivation The idea of using multi-task learning for implicit discourse relation classification is motivated by the observations that we have made on implicit discourse relation. On one hand, since building a hand-annotated implicit discourse relation corpus is costly and time consuming, most previous work attempted to use synthetic implicit discourse examples as training data. However, (Sporleder and Lascarides, 2008) found that the model trained on synthetic implicit data has not performed as well as expected in natural implicit data. They stated that the reason is linguistic dissimilarity between explicit and implicit discourse data. This indicates that straightly using synthetic implicit data as training data may not be helpful. On the other hand, as shown in Section 1, we observe that in some cases explicit discourse relation and implicit discourse relation can express the same meaning with or without a discourse connective. This indicates that in certain degree they must be similar to each other. If it is true, the synthetic implicit relations are expected to be helpful for implicit discourse relation classification. Therefore, what we have to do is to find a way to train a model which has the capabilities to learn from their similarity and to ignore their dissimilarity as well. To solve it, we propose a multi-task learning method for implicit discourse relation classi478 fication, where the classification model seeks the shared part through jointly learning main task and multiple auxiliary tasks. As a result, the model can be optimized by the similar shared part without bringing noise in the dissimilar part. Specifically, in this work, we use alternating structure optimization (ASO) (Ando and Zhang, 2005a) to construct the multi-task learning framework. 
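As a rough illustration of the training loop that ASO implies, the sketch below alternates between fitting per-task linear predictors and re-estimating a shared low-dimensional structure by SVD. It is only a simplified approximation of the procedure formalized in Section 3.2: the logistic loss, the scikit-learn classifier, the dimensionality h of the shared subspace and the fixed number of iterations are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aso_train(tasks, h=10, iterations=5, C=1.0):
    """Simplified ASO-style alternating optimization.

    tasks: list of (X, y) pairs, one binary problem per task, where X is an
    (n_l x p) feature matrix and y holds 0/1 labels.  Returns the shared
    structure matrix Theta (h x p) and the per-task weight pairs (w_l, v_l).
    """
    p = tasks[0][0].shape[1]
    theta = np.zeros((h, p))                 # shared structure, initially empty
    weights = []
    for _ in range(iterations):
        weights, u_vectors = [], []
        for X, y in tasks:
            # Step 1: with Theta fixed, fit w_l and v_l on the augmented
            # representation [X, X Theta^T] (one regularized problem per task).
            X_aug = np.hstack([X, X @ theta.T])
            clf = LogisticRegression(C=C, max_iter=1000).fit(X_aug, y)
            w, v = clf.coef_[0, :p], clf.coef_[0, p:]
            weights.append((w, v))
            u_vectors.append(w + theta.T @ v)    # full predictor in input space
        # Step 2: with the predictors fixed, re-estimate Theta from the SVD of
        # the stacked predictor vectors (top-h left singular directions).
        U = np.stack(u_vectors, axis=1)          # p x m
        left, _, _ = np.linalg.svd(U, full_matrices=False)
        k = min(h, left.shape[1])
        theta = np.zeros((h, p))
        theta[:k] = left[:, :k].T
    return theta, weights
```

In the setup described below, the main task (natural implicit examples) and the auxiliary tasks (synthetic examples) would each contribute binary problems of this kind, and only the shared structure together with the main-task weights would matter at test time.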
ASO has been shown to be useful in a semi-supervised learning configuration for several NLP applications, such as text chunking (Ando and Zhang, 2005b) and text classification (Ando and Zhang, 2005a).

3.2 Multi-task learning and ASO

Generally, multi-task learning (MTL) considers m prediction problems indexed by $\ell \in \{1, \dots, m\}$, each with $n_\ell$ samples $(X_i^\ell, Y_i^\ell)$ for $i \in \{1, \dots, n_\ell\}$ (the $X_i$ are input feature vectors and the $Y_i$ are the corresponding classification labels), and assumes that there exists a common predictive structure shared by these m problems. Generally, the joint linear model for MTL predicts problem $\ell$ in the following form:

$$f_\ell(\Theta, X) = w_\ell^{T} X + v_\ell^{T} \Theta X, \qquad \Theta \Theta^{T} = I, \qquad (1)$$

where $I$ is the identity matrix, $w_\ell$ and $v_\ell$ are weight vectors specific to each problem $\ell$, and $\Theta$ is the structure matrix shared by all the m predictors. The main goal of MTL is to learn a good common feature map $\Theta X$ for all the m problems. Several MTL methods have been presented to learn $\Theta X$ for all the m problems. In this work, we adopt the ASO method. Specifically, the ASO method adopts singular value decomposition (SVD) to obtain $\Theta$ and the m predictors that minimize the empirical risk summed over all the m problems. Thus, the optimization problem becomes the minimization of the joint empirical risk:

$$\sum_{\ell=1}^{m} \left( \frac{1}{n_\ell} \sum_{i=1}^{n_\ell} L\big(f_\ell(\Theta, X_i^\ell), Y_i^\ell\big) + \lambda \lVert w_\ell \rVert^2 \right), \qquad (2)$$

where the loss function $L(\cdot)$ quantifies the difference between the prediction $f_\ell(X_i)$ and the true output $Y_i$ for each predictor, and $\lambda$ is a regularization parameter for the squared regularization term that controls the model complexity. To minimize the empirical risk, ASO repeats the following alternating optimization procedure until a convergence criterion is met:

1) Fix $(\Theta, \{v_\ell\})$, and find the m predictors $f_\ell$ that minimize the above joint empirical risk.
2) Fix the m predictors $f_\ell$, and find $(\Theta, \{v_\ell\})$ that minimizes the above joint empirical risk.

3.3 Auxiliary tasks

There are two main principles for creating auxiliary tasks. First, the auxiliary tasks should be automatically labeled in order to reduce the cost of manual labeling. Second, since the MTL model learns from the shared part of the main task and the auxiliary tasks, the auxiliary tasks should be quite relevant/similar to the main task. It is generally believed that the more relevant the auxiliary tasks are to the main task, the more the main task can benefit from them. Following these two principles, we create the auxiliary tasks by generating automatically labeled data as follows.

Previous work (Marcu and Echihabi, 2002; Sporleder and Lascarides, 2008) adopted a predefined pattern-based approach to generate synthetic labeled data, where each predefined pattern has one discourse relation label. In contrast, we adopt an automatic approach to generate synthetic labeled data, where each discourse connective between two texts serves as their relation label. The reason lies in the very strong connection between discourse connectives and discourse relations. For example, the connective but always indicates a contrast relation between two texts, and (Pitler et al., 2008) showed that using only the connective itself, the accuracy of explicit discourse relation classification is over 93%. To build the mapping between discourse connectives and discourse relations, for each connective we count the times it appears in each relation and regard the relation in which it appears most frequently as its most relevant relation.
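A small sketch of this frequency-based mapping step, assuming the explicit annotations are available as (connective, relation) pairs; the function and variable names are illustrative, not taken from the paper.

```python
from collections import Counter, defaultdict

def most_relevant_relation(explicit_instances):
    """explicit_instances: iterable of (connective, relation) pairs taken from
    explicit discourse annotations, e.g. ("and", "Expansion").
    Returns {connective: relation}, keeping for each connective the relation
    it marks most frequently."""
    counts = defaultdict(Counter)
    for connective, relation in explicit_instances:
        counts[connective.lower()][relation] += 1
    return {conn: rel_counts.most_common(1)[0][0]
            for conn, rel_counts in counts.items()}

# The paper's example pattern "and -> Expansion" would fall out of
# counts["and"] being dominated by Expansion occurrences.
```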
Based on this mapping between connective and relation, we extract the synthetic labeled data containing the connective as training data for the auxiliary tasks. For example, and appears 3,000 times in PDTB as a discourse connective. Among them, it is manually annotated as an Expansion relation 2,938 times. So we regard the Expansion relation as its most relevant relation and generate a mapping pattern like "and → Expansion". Then we extract all sentences which contain a discourse "and" and remove this connective "and" from the sentences to generate synthetic implicit data. The resulting sentences are used in the auxiliary task and automatically marked as the Expansion relation.

4 Implementation Details of Multi-task Learning Method

4.1 Data sets for main and auxiliary tasks

To examine whether there is a difference in synthetic implicit data generated from an unannotated versus an annotated corpus, we use two corpora. One is a hand-annotated explicit discourse corpus, i.e., the explicit discourse relations in PDTB, denoted as exp. The other is an unannotated corpus, i.e., BLLIP (David McClosky and Johnson, 2008).

4.1.1 Penn Discourse Treebank

PDTB (Prasad et al., 2008) is the largest hand-annotated corpus of discourse relations so far. It contains 2,312 Wall Street Journal (WSJ) articles. The sense labels of discourse relations are organized hierarchically in three levels, i.e., class, type and subtype. The top level contains four major semantic classes: Comparison (denoted as Comp.), Contingency (Cont.), Expansion (Exp.) and Temporal (Temp.). For each class, a set of types is used to refine the relation sense. The set of subtypes further specifies the semantic contribution of each argument. In this paper, we focus on the top level (class) and the second level (type) relations because the subtype relations are too fine-grained and only appear in some relations. Both explicit and implicit discourse relations are labeled in PDTB. In our experiments, the implicit discourse relations are used in the main task and for evaluation, while the explicit discourse relations are used in the auxiliary task. A detailed description of the data sources for the different tasks is given below.

Data set for main task: Following previous work in (Pitler et al., 2009) and (Zhou et al., 2010), the implicit relations in sections 2-20 are used as training data for the main task (denoted as imp) and the implicit relations in sections 21-22 are used for evaluation. Table 1 shows the distribution of implicit relations. There are too few training instances for six second level relations (indicated by * in Table 1), so we removed these six relations in our experiments.

Top level   Second level             train    test
Temp                                   736      83
            Synchrony                  203      28
            Asynchronous               532      55
Cont                                  3333     279
            Cause                     3270     272
            Pragmatic Cause*            64       7
            Condition*                   1       0
            Pragmatic condition*         1       0
Comp                                  1939     152
            Contrast                  1607     134
            Pragmatic contrast*          4       0
            Concession                 183      17
            Pragmatic concession*        1       0
Exp                                   6316     567
            Conjunction               2872     208
            Instantiation             1063     119
            Restatement               2405     213
            Alternative                147       9
            Exception*                   0       0
            List                       338      12

Table 1: Distribution of implicit discourse relations in the top and second level of PDTB

Data set for auxiliary task: All explicit instances in sections 00-24 of PDTB, i.e., 18,459 instances, are used for the auxiliary task (denoted as exp). Following the method described in Section 3.3, we build the mapping patterns between connectives and relations in PDTB and generate synthetic labeled data by removing the connectives.
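Continuing the sketch above, synthetic implicit instances can then be produced by stripping the connective from an explicit instance and labeling the result with the connective's most relevant relation. The string-based removal and the instance format here are only illustrative assumptions; the actual pipeline works over PDTB/BLLIP annotations rather than raw strings.

```python
def make_synthetic_implicit(explicit_instances, connective_to_relation):
    """explicit_instances: iterable of (arg1, connective, arg2) triples.
    Yields (arg1, arg2, relation_label) triples with the connective removed,
    keeping only connectives whose most relevant relation is known."""
    for arg1, connective, arg2 in explicit_instances:
        relation = connective_to_relation.get(connective.lower())
        if relation is None:
            continue                      # connective not in the mapping
        yield arg1.strip(), arg2.strip(), relation

# Usage sketch: explicit instances from PDTB (or pattern-matched BLLIP
# sentences) become auxiliary-task training data; e.g. an explicit "but"
# instance would yield its two arguments labeled with the relation mapped
# from "but".
```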
According to the most relevant relation sense of connective removed, the resulting instances are grouped into different data sets. 4.1.2 BLLIP BLLIP North American News Text (Complete) is used as unlabeled data source to generate synthetic labeled data. In comparison with the synthetic labeled data generated from the explicit relations in PDTB, the synthetic labeled data from BLLIP contains more noise. This is because the former data is manually annotated whether a word serves as discourse connective or not, while the latter does not manually disambiguate two types of ambiguity, i.e., whether a word serves as discourse connective or not, and the type of discourse relation if it is a discourse connective. Finally, we extract 26, 412 instances from BLLIP (denoted as BLLIP) and use them for auxiliary task. 4.2 Feature representation For both main task and auxiliary tasks, we adopt the following three feature types. These features are chosen due to their superior performance in previous work (Pitler et al., 2009) and our previous work (Zhou et al., 2010). Verbs: Following (Pitler et al., 2009), we extract the pairs of verbs from both text spans. The number of verb pairs which have the same highest 480 Levin verb class levels (Levin, 1993) is counted as a feature. Besides, the average length of verb phrases in each argument is included as a feature. In addition, the part of speech tags of the main verbs (e.g., base form, past tense, 3rd person singular present, etc.) in each argument, i.e., MD, VB, VBD, VBG, VBN, VBP, VBZ, are recorded as features, where we simply use the first verb in each argument as the main verb. Polarity: This feature records the number of positive, negated positive, negative and neutral words in both arguments and their cross product as well. For negated positives, we first locate the negated words in text span and then define the closely behind positive word as negated positive. The polarity of each word in arguments is derived from Multi-perspective Question Answering Opinion Corpus (MPQA) (Wilson et al., 2009). Modality: We examine six modal words (i.e., can, may, must, need, shall, will) including their various tenses or abbreviation forms in both arguments. This feature records the presence or absence of modal words in both arguments and their cross product. 4.3 Classifiers used multi-task learning We extract the above linguistically informed features from two synthetic implicit data sets (i.e., BLLIP and exp) to learn the auxiliary classifier and from the natural implicit data set (i.e., imp) to learn the main classifier. Under the ASO-based multitask learning framework, the model of main task learns from the shared part of main task and auxiliary tasks. Specifically, we adopt multiple binary classification to build model for main task. That is, for each discourse relation, we build a binary classifier. 
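To make the feature representation above concrete, here is a minimal sketch of the modality feature and a simplified polarity-count feature for one argument pair. The six modal words come from Section 4.2; the extra tense/abbreviation forms, the polarity_of lexicon callable (standing in for the MPQA-based lookup, with negated-positive handling omitted) and the exact cross-product encoding are assumptions for illustration rather than the authors' implementation.

```python
from itertools import product

# Six modals from Section 4.2 plus some illustrative tense/abbreviation forms.
MODALS = {"can", "may", "must", "need", "shall", "will",
          "could", "might", "would", "should", "'ll"}

def modality_features(arg1_tokens, arg2_tokens):
    """Presence/absence of modal words in each argument plus their cross product."""
    has1 = int(any(t.lower() in MODALS for t in arg1_tokens))
    has2 = int(any(t.lower() in MODALS for t in arg2_tokens))
    return {"modal_arg1": has1, "modal_arg2": has2,
            f"modal_cross_{has1}_{has2}": 1}

def polarity_features(arg1_tokens, arg2_tokens, polarity_of):
    """Counts of positive/negative/neutral words per argument and their cross product.
    polarity_of: callable token -> one of {'pos', 'neg', 'neu'} (e.g. backed by MPQA)."""
    feats, counts = {}, []
    for name, toks in (("arg1", arg1_tokens), ("arg2", arg2_tokens)):
        c = {"pos": 0, "neg": 0, "neu": 0}
        for t in toks:
            c[polarity_of(t.lower())] += 1
        counts.append(c)
        feats.update({f"{name}_{k}": v for k, v in c.items()})
    for k1, k2 in product(("pos", "neg", "neu"), repeat=2):
        feats[f"cross_{k1}_{k2}"] = counts[0][k1] * counts[1][k2]
    return feats
```

Under the multiple-binary-classification setup of Section 4.3, feature dictionaries like these would be vectorized (e.g. with a DictVectorizer) and fed to one binary classifier per discourse relation.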
5 Experiments and Results 5.1 Experiments Although previous work has been done on PDTB (Pitler et al., 2009) and (Lin et al., 2009), we cannot make a direct comparison with them because various experimental conditions, such as, different classification strategies (multi-class classification, multiple binary classification), different data preparation (feature extraction and selection), different benchmark data collections (different sections for training and test, different levels of discourse relations), different classifiers with various parameters (MaxEnt, Na¨ıve Bayes, SVM, etc) and even different evaluation methods (F1, accuracy) have been adopted by different researchers. Therefore, to address the two questions raised in Section 1 and to make the comparison reliable and reasonable, we performed experiments on the top and second level of PDTB using single task learning and multi-task learning, respectively. The systems using single task learning serve as baseline systems. Under the single task learning, various combinations of exp and BLLIP data are incorporated with imp data for the implicit discourse relation classification task. We hypothesize that synthetical implicit data would contribute to the main task, i.e., the implicit discourse relation classification. Specifically, the natural implicit data (i.e., imp) are used to create main task and the synthetical implicit data (exp or BLLIP) are used to create auxiliary tasks for the purpose of optimizing the objective functions of main task. If the hypothesis is correct, the performance of main task would be improved by auxiliary tasks created from synthetical implicit data. Thus in the experiments of multi-task learning, only natural implicit examples (i.e., imp) data are used for main task training while different combinations of synthetical implicit examples (exp and BLLIP) are used for auxiliary task training. We adopt precision, recall and their combination F1 for performance evaluation. We also perform one-tailed t-test to validate if there is significant difference between two methods in terms of F1 performance analysis. 5.2 Results Table 2 summarizes the experimental results under single and multi-task learning on the top level of four PDTB relations with respect to different combinations of synthetic implicit data. For each relation, the first three rows indicate the results of using different single training data under single task learning and the last three rows indicate the results using different combinations of training data under single task and multi-task learning. The best F1 for every relation is shown in bold font. From this table, we can find that on four relations, our multi-task learning systems achieved the best performance using the combination of exp and BLLIP synthetic data. Table 3 summarizes the best single task and the best multi-task learning results on the second level of PDTB. For four relations, i.e., Synchrony, Con481 Single-task Multi-task Level 1 class Data P R F1 Data Data P R F1 (main) (aux) Comp. imp 21.43 37.50 27.27 BLLIP 12.68 53.29 20.48 exp 15.25 50.66 23.44 imp + exp 16.94 40.13 23.83 imp exp 22.94 49.34 30.90 imp + BLLIP 13.56 44.08 20.74 imp BLLIP 20.47 63.16 30.92 imp + exp + BLLIP 14.54 38.16 21.05 imp exp + BLLIP 23.47 48.03 31.53 Cont. 
imp 37.65 43.73 40.46 BLLIP 33.72 31.18 32.40 exp 35.24 26.52 30.27 imp + exp 39.00 13.98 20.58 imp exp 39.94 45.52 42.55 imp + BLLIP 37.30 24.73 29.74 imp BLLIP 37.80 63.80 47.47 imp + exp + BLLIP 39.37 31.18 34.80 imp exp + BLLIP 35.90 70.25 47.52 Exp. imp 56.59 66.67 61.21 BLLIP 53.29 40.04 45.72 exp 57.97 58.38 58.17 imp + exp 57.32 65.61 61.18 imp exp 59.14 67.90 63.22 imp + BLLIP 56.28 65.61 60.59 imp BLLIP 53.80 99.82 69.92 imp + exp + BLLIP 55.81 65.26 60.16 imp exp + BLLIP 53.90 99.82 70.01 Temp. imp 16.46 63.86 26.17 BLLIP 17.31 43.37 24.74 exp 15.46 36.14 21.66 imp + exp 15.35 39.76 22.15 imp exp 18.60 63.86 28.80 imp + BLLIP 14.74 33.73 20.51 imp BLLIP 18.12 67.47 28.57 imp + exp + BLLIP 15.94 39.76 22.76 imp exp + BLLIP 19.08 65.06 29.51 Table 2: Performance of precision, recall and F1 for 4 Level 1 relation classes. “-” indicates N.A. Single-task Multi-task Level 2 type Data P R F1 Data Data P R F1 (main) (aux) Asynchronous imp 11.36 74.55 19.71 imp exp + BLLIP 23.08 21.82 22.43 Synchrony imp imp exp + BLLIP Cause imp 36.38 64.34 46.48 imp exp + BLLIP 36.01 67.65 47.00 Contrast imp 20.07 42.54 27.27 imp exp + BLLIP 20.70 52.99 29.77 Concession imp imp exp + BLLIP Conjunction imp 26.35 63.46 37.24 imp exp + BLLIP 26.29 73.56 38.73 Instantiation imp 22.78 53.78 32.00 imp exp + BLLIP 22.55 57.98 32.47 Restatement imp 23.11 67.61 34.45 imp exp + BLLIP 26.93 53.99 35.94 Alternative imp imp exp + BLLIP List imp imp exp + BLLIP Table 3: Performance of precision, recall and F1 for 10 Level 2 relation types. “-” indicates 0.00. cession, Alternative and List, the classifier labels no instances due to the small percentages for these four types. Table 4 summarizes the one-tailed t-test results on the top level of PDTB between the best single task learning system (i.e., imp) and three multitask learning systems (imp:exp+BLLIP indicates that imp is used for main task and the combination of exp and BLLIP are for auxiliary task). The systems with insignificant performance differences are grouped into one set and ”>” and ”>>” denote better than at significance level 0.01 and 0.001 respectively. 5.3 Discussion From Table 2 to Table 4, several findings can be found as follows. We can see that the multi-task learning systems perform consistently better than the single task learning systems for the prediction of implicit discourse relations. Our best multi-task learning system achieves an averaged F1 improvement of 5.86% over the best single task learning system on the top level of PDTB relations. Specifically, for 482 Class One-tailed t-test results Comp. (imp:exp+BLLIP, imp:exp, imp:BLLIP) >> (imp) Cont. (imp:exp+BLLIP, imp:BLLIP) >> (imp:exp) > (imp) Exp. (imp:exp+BLLIP, imp:BLLIP) >> (imp:exp) > (imp) Temp. (imp:exp+BLLIP, imp:exp, imp:BLLIP) >> (imp) Table 4: Statistical significance tests results. the relations Comp., Cont., Exp., Temp., our best multi-task learning system achieve 4.26%, 7.06%, 8.8% and 3.34% F1 improvements over the best single task learning system. It indicates that using synthetic implicit data as auxiliary task greatly improves the performance of the main task. This is confirmed by the following t-tests in Table 4. In contrast to the performance of multi-task learning, the performance of the best single task learning system has been achieved on natural implicit discourse data alone. This finding is consistent with (Sporleder and Lascarides, 2008). 
It indicates that under single task learning, directly adding synthetic implicit data to increase the number of training data cannot be helpful to implicit discourse relation classification. The possible reasons result from (1) the different nature of implicit and explicit discourse data in linguistics and (2) the noise brought from synthetic implicit data. Based on the above analysis, we state that it is the way of utilizing synthetic implicit data that is important for implicit discourse relation classification. Although all three multi-task learning systems outperformed single task learning systems, we find that the two synthetic implicit data sets have not been shown a universally consistent performance on four top level PDTB relations. On one hand, for the relations Comp. and Temp., the performance of the two synthetic implicit data sets alone and their combination are comparable to each other and there is no significant difference between them. On the other hand, for the relations Cont. and Exp., the performance of exp data is inferior to that of BLLIP and their combination. This is contrary to our original expectation that exp data which has been manually annotated for discourse connective disambiguation should outperform BLLIP which contains a lot of noise. This finding indicates that under the multi-task learning, it may not be worthy of using manually annotated corpus to generate auxiliary data. It is quite promising since it can provide benefits to reducing the cost of human efforts on corpus annotation. 5.4 Ambiguity Analysis Although our experiments show that synthetic implicit data can help implicit discourse relation classification under multi-task learning framework, the overall performance is still quite low (44.64% in F1). Therefore, we analyze the types of ambiguity in relations and connectives in order to motivate possible future work. 5.4.1 Ambiguity of implicit relation Without explicit discourse connective, the implicit discourse relation instance can be understood in two or more different ways. Given the example E2 in PDTB, the PDTB annotators explain it as Contingency or Expansion relation and manually insert corresponding implicit connective for one thing or because to express its relation. (E2) Arg1:Now the stage is set for the battle to play out Arg2:The anti-programmers are getting some helpful thunder from Congress Connective1:because Sense1:Contingency.Cause.Reason Connective2:for one thing Sense2:Expansion.Instantiation (wsj 0118) Thus the ambiguity of implicit discourse relations makes this task difficult in itself. 5.4.2 Ambiguity of discourse connectives As we mentioned before, even given an explicit discourse connective in text, its discourse relation still can be explained in two or more different ways. And for different connectives, the ambiguity of relation senses is quite different. That is, the most frequent sense is not always the only sense that a connective expresses. In example E3, “since” is explained by annotators to express Temporal or Contingency relation. (E3) Arg1:MiniScribe has been on the rocks Arg2:since it disclosed early this year that its earnings reports for 1988 weren’t accurate. 483 Sense1:Temporal.Asynchronous.Succession Sense2:Contingency.Cause.Reason (wsj 0003) In PDTB, “since” appears 184 times in explicit discourse relations. It expresses Temporal relation for 80 times, Contingency relation for 94 times and both Temporal and Contingency for 10 time (like example E3). 
Therefore, although we use its most frequent sense, i.e., Contingency, to automatically extract sentences and label them, almost less than half of them actually express Temporal relation. Thus the ambiguity of discourse connectives is another source which has brought noise to data when we generate synthetical implicit discourse relation. 6 Conclusions In this paper, we present a multi-task learning method to improve implicit discourse relation classification by leveraging synthetic implicit discourse data. Results on PDTB show that under the framework of multi-task learning, using synthetic discourse data as auxiliary task significantly improves the performance of main task. Our best multi-task learning system achieves an averaged F1 improvement of 5.86% over the best single task learning system on the top level of PDTB relations. Specifically, for the relations Comp., Cont., Exp., Temp., our best multi-task learning system achieves 4.26%, 7.06%, 8.8%, and 3.34% F1 improvements over a state of the art baseline system. This indicates that it is the way of utilizing synthetic discourse examples that is important for implicit discourse relation classification. Acknowledgements This research is supported by grants from National Natural Science Foundation of China (No.60903093), Shanghai Pujiang Talent Program (No.09PJ1404500), Doctoral Fund of Ministry of Education of China (No. 20090076120029) and Shanghai Knowledge Service Platform Project (No. ZF1213). References R.K. Ando and T. Zhang. 2005a. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853. R.K. Ando and T. Zhang. 2005b. A high-performance semi-supervised learning method for text chunking. pages 1–9. Association for Computational Linguistics. Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. A. Argyriou, C.A. Micchelli, M. Pontil, and Y. Ying. 2008. A spectral regularization framework for multi-task structure learning. Advances in Neural Information Processing Systems, 20:2532. J. Baxter. 2000. A model of inductive bias learning. J. Artif. Intell. Res. (JAIR), 12:149–198. S.J. Blair-Goldensohn. 2007. Long-answer question answering and rhetorical-semantic relations. Ph.D. thesis. E. Bonilla, K.M. Chai, and C. Williams. 2008. Multitask gaussian process prediction. Advances in Neural Information Processing Systems, 20(October). L. Carlson, D. Marcu, and M.E. Okurowski. 2001. Building a discourse-tagged corpus in the framework of rhetorical structure theory. pages 1–10. Association for Computational Linguistics. Proceedings of the Second SIGdial Workshop on Discourse and Dialogue-Volume 16. R. Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. P. Cimiano, U. Reyle, and J. Saric. 2005. Ontologydriven discourse analysis for information extraction. Data and Knowledge Engineering, 55(1):59–83. Eugene Charniak David McClosky and Mark Johnson. 2008. Bllip north american news text, complete. T. Evgeniou and M. Pontil. 2004. Regularized multi– task learning. pages 109–117. ACM. Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. H. Hernault, D. Bollegala, and M. Ishizuka. 2010. A semi-supervised approach to improve classification of infrequent discourse relations using feature vector extension. pages 399–409. Association for Computational Linguistics. Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. H. Hernault, D. 
Bollegala, and M. Ishizuka. 2011. Semi-supervised discourse relation classification with structural learning. In Proceedings of the 12th international conference on Computational linguistics and intelligent text processing - Volume Part I, CICLing’11, pages 340–352, Berlin, Heidelberg. Springer-Verlag. T. Jebara. 2004. Multi-task feature and kernel selection for svms. page 55. ACM. Proceedings of the twenty-first international conference on Machine learning. B. Levin. 1993. English verb classes and alternations: A preliminary investigation, volume 348. University of Chicago press Chicago, IL:. 484 Z. Lin, M.Y. Kan, and H.T. Ng. 2009. Recognizing implicit discourse relations in the penn discourse treebank. pages 343–351. Association for Computational Linguistics. Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1. W.C. Mann and S.A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243–281. D. Marcu and A. Echihabi. 2002. An unsupervised approach to recognizing discourse relations. pages 368–375. Association for Computational Linguistics. Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. PDTB-Group. 2008. The penn discourse treebank 2.0 annotation manual. Technical report, Institute for Research in Cognitive Science, University of Pennsylvania. E. Pitler, M. Raghupathy, H. Mehta, A. Nenkova, A. Lee, and A. Joshi. 2008. Easily identifiable discourse relations. Citeseer. Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), Manchester, UK, August. E. Pitler, A. Louis, and A. Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. pages 683–691. Association for Computational Linguistics. Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In In Proceedings of LREC. M. Saito, K. Yamamoto, and S. Sekine. 2006. Using phrasal patterns to identify discourse relations. pages 133–136. Association for Computational Linguistics. Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers on XX. R. Soricut and D. Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. pages 149–156. Association for Computational Linguistics. Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1. C. Sporleder and A. Lascarides. 2008. Using automatically labelled examples to classify rhetorical relations: An assessment. Natural Language Engineering, 14(03):369–416. S. Thrun. 1996. Is learning the n-th thing any easier than learning the first? Advances in Neural Information Processing Systems, pages 640–646. S. Verberne, L. Boves, N. Oostdijk, and P.A. Coppen. 2007. Evaluating discourse-based answer extraction for why-question answering. pages 735–736. ACM. Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. W.T. Wang, J. Su, and C.L. Tan. 2010. Kernel based discourse relation recognition with temporal ordering information. pages 710–719. 
Association for Computational Linguistics. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. B. Wellner, J. Pustejovsky, C. Havasi, A. Rumshisky, and R. Sauri. 2006. Classification of discourse coherence relations: An exploratory study using multiple knowledge sources. pages 117–125. Association for Computational Linguistics. Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue. T. Wilson, J. Wiebe, and P. Hoffmann. 2009. Recognizing contextual polarity: An exploration of features for phrase-level sentiment analysis. Computational Linguistics, 35(3):399–433. F. Wolf, E. Gibson, A. Fisher, and M. Knight. 2005. The discourse graphbank: A database of texts annotated with coherence relations. Linguistic Data Consortium. Z.M. Zhou, Y. Xu, Z.Y. Niu, M. Lan, J. Su, and C.L. Tan. 2010. Predicting discourse connectives for implicit discourse relation recognition. pages 1507– 1514. Association for Computational Linguistics. Proceedings of the 23rd International Conference on Computational Linguistics: Posters. 485
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 486–496, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Combining Intra- and Multi-sentential Rhetorical Parsing for Document-level Discourse Analysis Shafiq Joty∗ [email protected] Qatar Computing Research Institute Qatar Foundation Doha, Qatar Giuseppe Carenini, Raymond Ng, Yashar Mehdad {carenini, rng, mehdad}@cs.ubc.ca Department of Computer Science University of British Columbia Vancouver, Canada Abstract We propose a novel approach for developing a two-stage document-level discourse parser. Our parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intrasentential parsing and the other for multisentential parsing. We present two approaches to combine these two stages of discourse parsing effectively. A set of empirical evaluations over two different datasets demonstrates that our discourse parser significantly outperforms the stateof-the-art, often by a wide margin. 1 Introduction Discourse of any kind is not formed by independent and isolated textual units, but by related and structured units. Discourse analysis seeks to uncover such structures underneath the surface of the text, and has been shown to be beneficial for text summarization (Louis et al., 2010; Marcu, 2000b), sentence compression (Sporleder and Lapata, 2005), text generation (Prasad et al., 2005), sentiment analysis (Somasundaran, 2010) and question answering (Verberne et al., 2007). Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), one of the most influential theories of discourse, represents texts by labeled hierarchical structures, called Discourse Trees (DTs), as exemplified by a sample DT in Figure 1. The leaves of a DT correspond to contiguous Elementary Discourse Units (EDUs) (six in the example). Adjacent EDUs are connected by rhetorical relations (e.g., Elaboration, Contrast), forming larger discourse units (represented by internal ∗This work was conducted at the University of British Columbia, Vancouver, Canada. nodes), which in turn are also subject to this relation linking. Discourse units linked by a rhetorical relation are further distinguished based on their relative importance in the text: nucleus being the central part, whereas satellite being the peripheral one. Discourse analysis in RST involves two subtasks: discourse segmentation is the task of identifying the EDUs, and discourse parsing is the task of linking the discourse units into a labeled tree. While recent advances in automatic discourse segmentation and sentence-level discourse parsing have attained accuracies close to human performance (Fisher and Roark, 2007; Joty et al., 2012), discourse parsing at the document-level still poses significant challenges (Feng and Hirst, 2012) and the performance of the existing document-level parsers (Hernault et al., 2010; Subba and DiEugenio, 2009) is still considerably inferior compared to human gold-standard. This paper aims to reduce this performance gap and take discourse parsing one step further. To this end, we address three key limitations of existing parsers as follows. First, existing discourse parsers typically model the structure and the labels of a DT separately in a pipeline fashion, and also do not consider the sequential dependencies between the DT constituents, which has been recently shown to be critical (Feng and Hirst, 2012). 
To address this limitation, as the first contribution, we propose a novel document-level discourse parser based on probabilistic discriminative parsing models, represented as Conditional Random Fields (CRFs) (Sutton et al., 2007), to infer the probability of all possible DT constituents. The CRF models effectively represent the structure and the label of a DT constituent jointly, and whenever possible, capture the sequential dependencies between the constituents. Second, existing parsers apply greedy and suboptimal parsing algorithms to build the DT for a document. To cope with this limitation, our CRF models support a probabilistic bottom-up parsing algorithm which is non-greedy and optimal.

[Figure 1: Discourse tree for two sentences in RST-DT (But he added: "Some people use the purchasers' index as a leading indicator, some use it as a coincident indicator. But the thing it's supposed to measure -- manufacturing strength -- it missed altogether last month."). Each of the sentences contains three EDUs (numbered 1-6), linked by the relations Attribution, Contrast, Elaboration and Same-Unit. The second sentence has a well-formed discourse tree, but the first sentence does not have one.]

Third, existing discourse parsers do not discriminate between intra-sentential parsing (i.e., building the DTs for the individual sentences) and multi-sentential parsing (i.e., building the DT for the document). However, we argue that distinguishing between these two conditions can result in more effective parsing. Two separate parsing models could exploit the fact that rhetorical relations are distributed differently intra-sententially vs. multi-sententially. Also, they could independently choose their own informative features. As another key contribution of our work, we devise two different parsing components: one for intra-sentential parsing, the other for multi-sentential parsing. This provides scalable, modular and flexible solutions that can exploit the strong correlation observed between the text structure (sentence boundaries) and the structure of the DT.

In order to develop a complete and robust discourse parser, we combine our intra-sentential and multi-sentential parsers in two different ways. Since most sentences have a well-formed discourse sub-tree in the full document-level DT (for example, the second sentence in Figure 1), our first approach constructs a DT for every sentence using our intra-sentential parser, and then runs the multi-sentential parser on the resulting sentence-level DTs. However, this approach would disregard those cases where rhetorical structures violate sentence boundaries. For example, consider the first sentence in Figure 1. It does not have a well-formed sub-tree because the unit containing EDUs 2 and 3 merges with the next sentence and only then is the resulting unit merged with EDU 1. Our second approach, in an attempt to deal with these cases, builds sentence-level sub-trees by applying the intra-sentential parser on a sliding window covering two adjacent sentences and by then consolidating the results produced by overlapping windows. After that, the multi-sentential parser takes all these sentence-level sub-trees and builds a full rhetorical parse for the document.

While previous approaches have been tested on only one corpus, we evaluate our approach on texts from two very different genres: news articles and instructional how-to manuals. The results demonstrate that our contributions provide consistent and statistically significant improvements over previous approaches.
Our final result compares very favorably to the result of state-of-the-art models in document-level discourse parsing.

In the rest of the paper, after discussing related work in Section 2, we present our discourse parsing framework in Section 3. In Section 4, we describe the intra- and multi-sentential parsing components. Section 5 presents the two approaches to combine the two stages of parsing. The experiments and error analysis, followed by future directions, are discussed in Section 6. Finally, we summarize our contributions in Section 7.

2 Related work

The idea of staging document-level discourse parsing on top of sentence-level discourse parsing was investigated in (Marcu, 2000a; LeThanh et al., 2004). These approaches mainly rely on discourse markers (or cues), and use hand-coded rules to build DTs for sentences first, then for paragraphs, and so on. However, rhetorical relations are often not explicitly signaled by discourse markers (Marcu and Echihabi, 2002), and discourse structures do not always correspond to paragraph structures (Sporleder and Lascarides, 2004). Therefore, rather than relying on hand-coded rules based on discourse markers, recent approaches employ supervised machine learning techniques with a large set of informative features.

Hernault et al. (2010) present the publicly available HILDA parser. Given the EDUs in a document, HILDA iteratively employs two Support Vector Machine (SVM) classifiers in a pipeline to build the DT. In each iteration, a binary classifier first decides which of the adjacent units to merge, then a multi-class classifier connects the selected units with an appropriate relation label. They evaluate their approach on the RST-DT corpus (Carlson et al., 2002) of news articles. On a different genre of instructional texts, Subba and Di-Eugenio (2009) propose a shift-reduce parser that relies on a classifier for relation labeling. Their classifier uses Inductive Logic Programming (ILP) to learn first-order logic rules from a set of features including compositional semantics. In this work, we address the limitations of these models (described in Section 1) by introducing our novel discourse parser.

3 Our Discourse Parsing Framework

Given a document with sentences already segmented into EDUs, the discourse parsing problem is determining which discourse units (EDUs or larger units) to relate (i.e., the structure), and how to relate them (i.e., the labels or the discourse relations) in the resulting DT. Since we already have an accurate sentence-level discourse parser (Joty et al., 2012), a straightforward approach to document-level parsing could be to simply apply this parser to the whole document. However, this strategy would be problematic because of scalability and modeling issues. Note that the number of valid trees grows exponentially with the number of EDUs in a document (for n + 1 EDUs, the number of valid discourse trees is the Catalan number Cn). Therefore, an exhaustive search over the valid trees is often unfeasible, even for relatively small documents.

For modeling, the problem is two-fold. On the one hand, it appears that rhetorical relations are distributed differently intra-sententially vs. multi-sententially. For example, Figure 2 shows a comparison between the two distributions of the six most frequent relations on a development set containing 20 randomly selected documents from RST-DT. Notice that the relations Attribution and Same-Unit are more frequent than Joint in the intra-sentential case, whereas Joint is more frequent than the other two in the multi-sentential case. On the other hand, different kinds of features are applicable and informative for intra-sentential vs. multi-sentential parsing. For example, syntactic features like dominance sets (Soricut and Marcu, 2003) are extremely useful for sentence-level parsing, but are not even applicable in the multi-sentential case. Likewise, lexical chain features (Sporleder and Lascarides, 2004), which are useful for multi-sentential parsing, are not applicable at the sentence level.

[Figure 2: Distributions of the six most frequent relations (Elaboration, Joint, Attribution, Same-Unit, Contrast, Explanation) in the intra-sentential and multi-sentential parsing scenarios.]

Based on these observations, our discourse parsing framework comprises two separate modules: an intra-sentential parser and a multi-sentential parser (Figure 3). First, the intra-sentential parser produces one or more discourse sub-trees for each sentence. Then, the multi-sentential parser generates a full DT for the document from these sub-trees. Both of our parsers have the same two components: a parsing model assigns a probability to every possible DT, and a parsing algorithm identifies the most probable DT among the candidate DTs in that scenario. While the two models are rather different, the same parsing algorithm is shared by the two modules. Staging multi-sentential parsing on top of intra-sentential parsing in this way allows us to exploit the strong correlation between the text structure and the DT structure, as explained in detail in Section 5.

[Figure 3: Discourse parsing framework: sentences segmented into EDUs are fed to the intra-sentential parser (model + algorithm), whose output is passed to the multi-sentential parser (model + algorithm), which produces the document-level discourse tree.]

Before describing our parsing models and the parsing algorithm, we introduce some terminology that we will use throughout the paper. Following (Joty et al., 2012), a DT can be formally represented as a set of constituents of the form R[i, m, j], referring to a rhetorical relation R between the discourse unit containing EDUs i through m and the unit containing EDUs m+1 through j. For example, the DT for the second sentence in Figure 1 can be represented as {Elaboration-NS[4,4,5], Same-Unit-NN[4,5,6]}. Notice that a relation R also specifies the nuclearity statuses of the discourse units involved, which can be one of Nucleus-Satellite (NS), Satellite-Nucleus (SN) and Nucleus-Nucleus (NN).

4 Parsing Models and Parsing Algorithm

The job of our intra-sentential and multi-sentential parsing models is to assign a probability to each of the constituents of all possible DTs at the sentence level and at the document level, respectively. Formally, given the model parameters Θ, for each possible constituent R[i, m, j] in a candidate DT at the sentence or document level, the parsing model estimates P(R[i, m, j]|Θ), which specifies a joint distribution over the label R and the structure [i, m, j] of the constituent.

4.1 Intra-Sentential Parsing Model

Recently, we proposed a novel parsing model for sentence-level discourse parsing (Joty et al., 2012) that outperforms previous approaches by effectively modeling sequential dependencies along with structure and labels jointly. Below we briefly describe the parsing model and show how it is applied to obtain the probabilities of all possible DT constituents at the sentence level. Figure 4 shows the intra-sentential parsing model expressed as a Dynamic Conditional Random Field (DCRF) (Sutton et al., 2007).
The observed nodes Uj in a sequence represent the discourse units (EDUs or larger units). The first layer of hidden nodes are the structure nodes, where Sj∈{0, 1} denotes whether two adjacent discourse units Uj−1 and Uj should be connected or not. The second layer of hidden nodes are the relation nodes, with Rj∈{1 . . . M} denoting the relation between two adjacent units Uj−1 and Uj, where M is the total number of relations in the relation set. The connections between adjacent nodes in a hidden layer encode sequential dependencies between the respective hidden nodes, and can enforce constraints such as the fact that a Sj= 1 must not follow a Sj−1= 1. The connections between the two hidden layers model the structure and the relation of a DT (sentence-level) constituent jointly. To obtain the probability of the constituents of all candidate DTs for a sentence, we apply the parsing model recursively at different levels of the DT and compute the posterior marginals over the relation-structure pairs. To illustrate the U U U U U 2 2 2 3 j t-1 t S S S S S R R R R R 3 3 j j t-1 t-1 t Unit sequence at level i Structure sequence Relation sequence U1 t Figure 4: A chain-structured DCRF as our intrasentential parsing model. process, let us assume that the sentence contains four EDUs. At the first (bottom) level, when all the units are the EDUs, there is only one possible unit sequence to which we apply our DCRF model (Figure 5(a)). We compute the posterior marginals P(R2, S2=1|e1, e2, e3, e4, Θ), P(R3, S3=1|e1, e2, e3, e4, Θ) and P(R4, S4=1|e1, e2, e3, e4, Θ) to obtain the probability of the constituents R[1, 1, 2], R[2, 2, 3] and R[3, 3, 4], respectively. At the second level, there are three possible unit sequences (e1:2, e3, e4), (e1,e2:3, e4) and (e1,e2,e3:4). Figure 5(b) shows their corresponding DCRFs. The posterior marginals P(R3, S3=1|e1:2,e3,e4,Θ), P(R2:3 S2:3=1|e1,e2:3,e4,Θ), P(R4, S4=1|e1,e2:3,e4,Θ) and P(R3:4, S3:4=1|e1,e2,e3:4,Θ) computed from the three sequences correspond to the probability of the constituents R[1, 2, 3], R[1, 1, 3], R[2, 3, 4] and R[2, 2, 4], respectively. Similarly, we attain the probability of the constituents R[1, 1, 4], R[1, 2, 4] and R[1, 3, 4] by computing their respective posterior marginals from the three possible sequences at the third (top) level. e 1 e e 2 2 2 3 S S3 R R3 (a) e 1 e S R 1:2 3 3 3 e e S R 2:3 2:3 (b) 2:3 e4 S4 R4 e4 S4 R4 e4 S4 R4 1 e e S R 2 2 2 e 3:4 S3:4 R3:4 1 e S R 1:3 4 4 4 e e S R 2:4 2:4 (c) 2:4 e e S R 1:2 e 3:4 3:4 3:4 (i) (ii) (iii) (i) (ii) (iii) Figure 5: Our parsing model applied to the sequences at different levels of a sentence-level DT. (a) Only possible sequence at the first level, (b) Three possible sequences at the second level, (c) Three possible sequences at the third level. At this point what is left to be explained is how we generate all possible sequences for a given number of EDUs in a sentence. Algorithm 1 demonstrates how we do that. More specifically, to compute the probabilities of each DT con489 stituent R[i, k, j], we need to generate sequences like (e1, · · · , ei−1, ei:k, ek+1:j, ej+1, · · · , en) for 1 ≤i ≤k < j ≤n. In doing so, we may generate some duplicate sequences. Clearly, the sequence (e1, · · · , ei−1, ei:i, ei+1:j, ej+1, · · · , en) for 1 ≤i ≤k < j < n is already considered for computing the probability of R[i + 1, j, j + 1]. Therefore, it is a duplicate sequence that we exclude from our list of all possible sequences. 
Input: Sequence of EDUs: (e1, e2, · · · , en) Output: List of sequences: L for i = 1 →n −1 do for j = i + 1 →n do if j == n then for k = i →j −1 do L.append ((e1, .., ei−1, ei:k, ek+1:j, ej+1, .., en)) end else for k = i + 1 →j −1 do L.append ((e1, .., ei−1, ei:k, ek+1:j, ej+1, .., en)) end end end end Algorithm 1: Generating all possible sequences for a sentence with n EDUs. Once we obtain the probability of all possible DT constituents, the discourse sub-trees for the sentences are built by applying an optimal probabilistic parsing algorithm (Section 4.4) using one of the methods described in Section 5. 4.2 Multi-Sentential Parsing Model Given the discourse units (sub-trees) for all the sentences of a document, a simple approach to build the rhetorical tree of the document would be to apply a new DCRF model, similar to the one in Figure 4 (with different parameters), to all the possible sequences generated from these units to infer the probability of all possible higher-order constituents. However, the number of possible sequences and their length increase with the number of sentences in a document. For example, assuming that each sentence has a well-formed DT, for a document with n sentences, Algorithm 1 generates O(n3) sequences, where the sequence at the bottom level has n units, each of the sequences at the second level has n-1 units, and so on. Since the model in Figure 4 has a “fat” chain structure, U U t-1 t S Rt Adjacent Units at level i Structure Relation t Figure 6: A CRF as a multi-sentential parsing model. we could use forwards-backwards algorithm for exact inference in this model (Sutton and McCallum, 2012). However, forwards-backwards on a sequence containing T units costs O(TM2) time, where M is the number of relations in our relation set. This makes the chain-structured DCRF model impractical for multi-sentential parsing of long documents, since learning requires to run inference on every training sequence with an overall time complexity of O(TM2n3) per document. Our model for multi-sentential parsing is shown in Figure 6. The two observed nodes Ut−1 and Ut are two adjacent discourse units. The (hidden) structure node S∈{0, 1} denotes whether the two units should be connected or not. The hidden node R∈{1 . . . M} represents the relation between the two units. Notice that like the previous model, this is also an undirected graphical model. It becomes a CRF if we directly model the hidden (output) variables by conditioning its clique potential (or factor) φ on the observed (input) variables: P(Rt, St|x, Θ) = 1 Z(x, Θ)φ(Rt, St|x, Θ) (1) where x represents input features extracted from the observed variables Ut−1 and Ut, and Z(x, Θ) is the partition function. We use a log-linear representation of the factor: φ(Rt, St|x, Θ) = exp(ΘT f(Rt, St, x)) (2) where f(Rt, St, x) is a feature vector derived from the input features x and the labels Rt and St, and Θ is the corresponding weight vector. Although, this model is similar in spirit to the model in Figure 4, we now break the chain structure, which makes the inference much faster (i.e., complexity of O(M2)). Breaking the chain structure also allows us to balance the data for training (equal number instances with S=1 and S=0), which dramatically reduces the learning time of the model. We apply our model to all possible adjacent units at all levels for the multi-sentential case, and 490 compute the posterior marginals of the relationstructure pairs P(Rt, St=1|Ut−1, Ut, Θ) to obtain the probability of all possible DT constituents. 
4.3 Features Used in our Parsing Models Table 1 summarizes the features used in our parsing models, which are extracted from two adjacent units Ut−1 and Ut. Since most of these features are adopted from previous studies (Joty et al., 2012; Hernault et al., 2010), we briefly describe them. Organizational features include the length of the units as the number of EDUs and tokens. It also includes the distances of the units from the beginning and end of the sentence (or text in the multi-sentential case). Text structural features indirectly capture the correlation between text structure and rhetorical structure by counting the number of sentence and paragraph boundaries in the units. Discourse markers (e.g., because, although) carry informative clues for rhetorical relations (Marcu, 2000a). Rather than using a fixed list of discourse markers, we use an empirically learned lexical N-gram dictionary following (Joty et al., 2012). This approach has been shown to be more robust and flexible across domains (Biran and Rambow, 2011; Hernault et al., 2010). We also include part-of-speech (POS) tags for the beginning and end N tokens in a unit. 8 Organizational features Intra & Multi-Sentential Number of EDUs in unit 1 (or unit 2). Number of tokens in unit 1 (or unit 2). Distance of unit 1 in EDUs to the beginning (or to the end). Distance of unit 2 in EDUs to the beginning (or to the end). 4 Text structural features Multi-Sentential Number of sentences in unit 1 (or unit 2). Number of paragraphs in unit 1 (or unit 2). 8 N-gram features N∈{1, 2, 3} Intra & Multi-Sentential Beginning (or end) lexical N-grams in unit 1. Beginning (or end) lexical N-grams in unit 2. Beginning (or end) POS N-grams in unit 1. Beginning (or end) POS N-grams in unit 2. 5 Dominance set features Intra-Sentential Syntactic labels of the head node and the attachment node. Lexical heads of the head node and the attachment node. Dominance relationship between the two units. 8 Lexical chain features Multi-Sentential Number of chains start in unit 1 and end in unit 2. Number of chains start (or end) in unit 1 (or in unit 2). Number of chains skipping both unit 1 and unit 2. Number of chains skipping unit 1 (or unit 2). 2 Contextual features Intra & Multi-Sentential Previous and next feature vectors. 2 Substructure features Intra & Multi-Sentential Root nodes of the left and right rhetorical sub-trees. Table 1: Features used in our parsing models. Lexico-syntactic features dominance sets (Soricut and Marcu, 2003) are very effective for intra-sentential parsing. We include syntactic labels and lexical heads of head and attachment nodes along with their dominance relationship as features. Lexical chains (Morris and Hirst, 1991) are sequences of semantically related words that can indicate topic shifts. Features extracted from lexical chains have been shown to be useful for finding paragraph-level discourse structure (Sporleder and Lascarides, 2004). We compute lexical chains for a document following the approach proposed in (Galley and McKeown, 2003), that extracts lexical chains after performing word sense disambiguation. Following (Joty et al., 2012), we also encode contextual and rhetorical sub-structure features in our models. The rhetorical sub-structure features incorporate hierarchical dependencies between DT constituents. 
4.4 Parsing Algorithm Given the probability of all possible DT constituents in the intra-sentential and multi-sentential scenarios, the job of the parsing algorithm is to find the most probable DT for that scenario. Following (Joty et al., 2012), we implement a probabilistic CKY-like bottom-up algorithm for computing the most likely parse using dynamic programming. Specifically, with n discourse units, we use the upper-triangular portion of the n×n dynamic programming table D. Given Ux(0) and Ux(1) are the start and end EDU Ids of unit Ux: D[i, j] = P(R[Ui(0), Uk(1), Uj(1)]) (3) where, k = argmax i≤p≤j P(R[Ui(0), Up(1), Uj(1)]). Note that, in contrast to previous studies on document-level parsing (Hernault et al., 2010; Subba and Di-Eugenio, 2009; Marcu, 2000b), which use a greedy algorithm, our approach finds a discourse tree that is globally optimal. 5 Document-level Parsing Approaches Now that we have presented our intra-sentential and our multi-sentential parsers, we are ready to describe how they can be effectively combined to perform document-level discourse analysis. Recall that a key motivation for a two-stage parsing is that it allows us to capture the correlation between text structure and discourse structure in a scalable, modular and flexible way. Below we describe two different approaches to model this correlation. 491 5.1 1S-1S (1 Sentence-1 Sub-tree) A key finding from several previous studies on sentence-level discourse analysis is that most sentences have a well-formed discourse sub-tree in the full document-level DT (Joty et al., 2012; Fisher and Roark, 2007). For example, Figure 7(a) shows 10 EDUs in 3 sentences (see boxes), where the DTs for the sentences obey their respective sentence boundaries. The 1S-1S approach aims to maximally exploit this finding. It first constructs a DT for every sentence using our intra-sentential parser, and then it provides our multi-sentential parser with the sentence-level DTs to build the rhetorical parse for the whole document. 1 2 3 S 1 8 9 10 S 3 4 5 6 7 S 2 1 2 3 S 1 8 9 10 S 3 4 5 6 7 S 2 (a) (b) ? ? ? Figure 7: Two possible DTs for three sentences. 5.2 Sliding Window While the assumption made by 1S-1S clearly simplifies the parsing process, it totally ignores the cases where discourse structures violate sentence boundaries. For example, in the DT shown in Figure 7(b), sentence S2 does not have a well-formed sub-tree because some of its units attach to the left (4-5, 6) and some to the right (7). Vliet and Redeker (2011) call these cases as ‘leaky’ boundaries. Even though less than 5% of the sentences have leaky boundaries in RST-DT, in other corpora this can be true for a larger portion of the sentences. For example, we observe over 12% sentences with leaky boundaries in the Instructional corpus of (Subba and Di-Eugenio, 2009). However, we notice that in most cases where discourse structures violate sentence boundaries, its units are merged with the units of its adjacent sentences, as in Figure 7(b). For example, this is true for 75% cases in our development set containing 20 news articles from RST-DT and for 79% cases in our development set containing 20 how-to-do manuals from the Instructional corpus. Based on this observation, we propose a sliding window approach. In this approach, our intra-sentential parser works with a window of two consecutive sentences, and builds a DT for the two sentences. For example, given the three sentences in Figure 7, our intra-sentential parser constructs a DT for S1-S2 and a DT for S2-S3. 
In this process, each sentence in a document except the first and the last will be associated with two DTs: one with the previous sentence (say DTp) and one with the next (say DTn). In other words, for each non-boundary sentence, we will have two decisions: one from DTp and one from DTn. Our parser consolidates the two decisions and generates one or more sub-trees for each sentence by checking the following three mutually exclusive conditions one after another: • Same in both: If the sentence has the same (in terms of both structure and labels) well-formed sub-tree in both DTp and DTn, we take this subtree for the sentence. For example, in Figure 8(a), S2 has the same sub-tree in the two DTs, i.e. a DT for S1-S2 and a DT for S2-S3. The two decisions agree on the DT for the sentence. • Different but no cross: If the sentence has a well-formed sub-tree in both DTp and DTn, but the two sub-trees vary either in structure or in labels, we pick the most probable one. For example, consider the DT for S1-S2 in Figure 8(a) and the DT for S2-S3 in Figure 8(b). In both cases S2 has a well-formed sub-tree, but they differ in structure. We pick the sub-tree which has the higher probability in the two dynamic programming tables. 1 2 3 S1 8 9 10 S 3 4 5 6 7 S2 1 2 3 S1 8 9 10 S 3 4 5 6 7 S2 (a) (c) 8 9 10 S 3 4 5 6 7 S2 (b) 4 5 6 7 S 2 (i) (ii) 4 5 6 7 S2 Figure 8: Extracting sub-trees for S2. • Cross: If either or both of DTp and DTn segment the sentence into multiple sub-trees, we pick the one with more sub-trees. For example, consider the two DTs in Figure 8(c). In the DT for S1-S2, S2 has three sub-trees (4-5,6,7), whereas in the DT for S2-S3, it has two (4-6,7). So, we extract the three sub-trees for S2 from the first DT. If the sentence has the same number of sub-trees in both DTp and DTn, we pick the one with higher probability in the dynamic programming tables. At the end, the multi-sentential parser takes all these sentence-level sub-trees for a document, and builds a full rhetorical parse for the document. 492 6 Experiments 6.1 Corpora While previous studies on document-level parsing only report their results on a particular corpus, to show the generality of our method, we experiment with texts from two very different genres. Our first corpus is the standard RST-DT (Carlson et al., 2002), which consists of 385 Wall Street Journal articles, and is partitioned into a training set of 347 documents and a test set of 38 documents. 53 documents, selected from both sets were annotated by two annotators, based on which we measure human agreement. In RST-DT, the original 25 rhetorical relations defined by (Mann and Thompson, 1988) are further divided into a set of 18 coarser relation classes with 78 finer-grained relations. Our second corpus is the Instructional corpus prepared by (Subba and Di-Eugenio, 2009), which contains 176 how-to-do manuals on homerepair. The corpus was annotated with 26 informational relations (e.g., Preparation-Act, Act-Goal). 6.2 Experimental Setup We experiment with our discourse parser on the two datasets using our two different parsing approaches, namely 1S-1S and the sliding window. We compare our approach with HILDA (Hernault et al., 2010) on RST-DT, and with the ILP-based approach of (Subba and Di-Eugenio, 2009) on the Instructional corpus, since they are the state-ofthe-art on the respective genres. On RST-DT, the standard split was used for training and testing purposes. 
The results for HILDA were obtained by running the system with default settings on the same inputs we provided to our system. Since we could not run the ILP-based system of (Subba and Di-Eugenio, 2009) (not publicly available) on the Instructional corpus, we report the performances presented in their paper. They used 151 documents for training and 25 documents for testing. Since we did not have access to their particular split, we took 5 random samples of 151 documents for training and 25 documents for testing, and report the average performance over the 5 test sets. To evaluate the parsing performance, we use the standard unlabeled (i.e., hierarchical spans) and labeled (i.e., nuclearity and relation) precision, recall and F-score as described in (Marcu, 2000b). To compare with previous studies, our experiments on RST-DT use the 18 coarser relations. After attaching the nuclearity statuses (NS, SN, NN) to these relations, we get 41 distinct relations. Following (Subba and Di-Eugenio, 2009) on the Instructional corpus, we use 26 relations, and treat the reversals of non-commutative relations as separate relations. That is, Goal-Act and Act-Goal are considered as two different relations. Attaching the nuclearity statuses to these relations gives 76 distinct relations. Analogous to previous studies, we map the n-ary relations (e.g., Joint) into nested right-branching binary relations. 6.3 Results and Error Analysis Table 2 presents F-score parsing results for our parsers and the existing systems on the two corpora.2 On both corpora, our parser, namely, 1S-1S (TSP 1-1) and sliding window (TSP SW), outperform existing systems by a wide margin (p<7.1e05).3 On RST-DT, our parsers achieve absolute F-score improvements of 8%, 9.4% and 11.4% in span, nuclearity and relation, respectively, over HILDA. This represents relative error reductions of 32%, 23% and 21% in span, nuclearity and relation, respectively. Our results are also close to the upper bound, i.e. human agreement on this corpus. On the Instructional genre, our parsers deliver absolute F-score improvements of 10.5%, 13.6% and 8.14% in span, nuclearity and relations, respectively, over the ILP-based approach. Our parsers, therefore, reduce errors by 36%, 27% and 13% in span, nuclearity and relations, respectively. If we compare the performance of our parsers on the two corpora, we observe higher results on RST-DT. This can be explained in at least two ways. First, the Instructional corpus has a smaller amount of data with a larger set of relations (76 when nuclearity attached). Second, some frequent relations are (semantically) very similar (e.g., Preparation-Act, Step1-Step2), which makes it difficult even for the human annotators to distinguish them (Subba and Di-Eugenio, 2009). Comparison between our two models reveals that TSP SW significantly outperforms TSP 1-1 only in finding the right structure on both corpora (p<0.01). Not surprisingly, the improvement is higher on the Instructional corpus. A likely explanation is that the Instructional corpus contains more leaky boundaries (12%), allowing the sliding 2Precision, Recall and F-score are the same when manual segmentation is used (see Marcu, (2000b), page 143). 3Since we did not have access to the output or to the system of (Subba and Di-Eugenio, 2009), we were not able to perform a significance test on the Instructional corpus. 
493 RST-DT Instructional Metrics HILDA TSP 1-1 TSP SW Human ILP TSP 1-1 TSP SW Span 74.68 82.47* 82.74*† 88.70 70.35 79.67 80.88† Nuclearity 58.99 68.43* 68.40* 77.72 49.47 63.03 63.10 Relation 44.32 55.73* 55.71* 65.75 35.44 43.52 43.58 Table 2: Parsing results of different models using manual (gold) segmentation. Performances significantly superior to HILDA (with p<7.1e-05) are denoted by *. Significant differences between TSP 1-1 and TSP SW (with p<0.01) are denoted by †. T-C T-O T-CM M-M CMP EV SU CND EN CA TE EX BA CO JO S-U AT EL T-C T-O T-CM M-M CMP EV SU CND EN CA TE EX BA CO JO S-U AT EL 1 0 7 3 2 11 12 2 7 11 4 12 12 9 13 0 9 359 0 0 0 1 0 2 0 3 1 3 3 3 5 0 0 1 272 2 0 0 0 0 1 0 0 0 0 0 0 0 1 0 1 85 3 2 0 0 2 0 1 2 1 0 0 7 9 3 6 7 57 1 0 8 0 0 0 0 0 0 0 3 0 2 1 1 2 33 2 0 0 0 0 0 0 1 3 0 0 1 0 2 9 0 19 2 1 0 0 1 0 0 0 1 3 2 0 0 0 4 1 12 1 2 1 0 0 8 0 0 0 0 0 0 0 0 0 0 7 0 4 3 1 0 0 1 0 0 0 0 1 0 0 0 1 3 0 5 1 1 1 0 0 6 0 0 0 0 0 0 0 0 24 2 2 1 0 0 0 0 0 14 0 0 0 0 0 0 8 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 22 1 0 1 0 1 2 2 0 1 0 0 0 0 10 1 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Figure 9: Confusion matrix for relation labels on the RST-DT test set. Y-axis represents true and X-axis represents predicted relations. The relations are Topic-Change (T-C), Topic-Comment (T-CM), Textual Organization (TO), Manner-Means (M-M), Comparison (CMP), Evaluation (EV), Summary (SU), Condition (CND), Enablement (EN), Cause (CA), Temporal (TE), Explanation (EX), Background (BA), Contrast (CO), Joint (JO), Same-Unit (S-U), Attribution (AT) and Elaboration (EL). window approach to be more effective in finding those, without inducing much noise for the labels. This clearly demonstrates the potential of TSP SW for datasets with even more leaky boundaries e.g., the Dutch (Vliet and Redeker, 2011) and the German Potsdam (Stede, 2004) corpora. Error analysis reveals that although TSP SW finds more correct structures, a corresponding improvement in labeling relations is not present because in a few cases, it tends to induce noise from the neighboring sentences for the labels. For example, when parsing was performed on the first sentence in Figure 1 in isolation using 1S-1S, our parser rightly identifies the Contrast relation between EDUs 2 and 3. But, when it is considered with its neighboring sentences by the sliding window, the parser labels it as Elaboration. A promising strategy to deal with this and similar problems that we plan to explore in future, is to apply both approaches to each sentence and combine them by consolidating three probabilistic decisions, i.e. the one from 1S-1S and the two from sliding window. To further analyze the errors made by our parser on the hardest task of relation labeling, Figure 9 presents the confusion matrix for TSP 1-1 on the RST-DT test set. The relation labels are ordered according to their frequency in the RST-DT training set. In general, the errors are produced by two different causes acting together: (i) imbalanced distribution of the relations, and (ii) semantic similarity between the relations. The most frequent relation Elaboration tends to mislead others especially, the ones which are semantically similar (e.g., Explanation, Background) and less frequent (e.g., Summary, Evaluation). The relations which are semantically similar mislead each other (e.g., Temporal:Background, Cause:Explanation). 
These observations suggest two ways to improve our parser. We would like to employ a more robust method (e.g., ensemble methods with bagging) to deal with the imbalanced distribution of relations, along with taking advantage of a richer semantic knowledge (e.g., compositional semantics) to cope with the errors caused by semantic similarity between the rhetorical relations. 7 Conclusion In this paper, we have presented a novel discourse parser that applies an optimal parsing algorithm to probabilities inferred from two CRF models: one for intra-sentential parsing and the other for multi-sentential parsing. The two models exploit their own informative feature sets and the distributional variations of the relations in the two parsing conditions. We have also presented two novel approaches to combine them effectively. Empirical evaluations on two different genres demonstrate that our approach yields substantial improvement over existing methods in discourse parsing. Acknowledgments We are grateful to Frank Tompa and the anonymous reviewers for their comments, and the NSERC BIN and CGS-D for financial support. 494 References O. Biran and O. Rambow. 2011. Identifying Justifications in Written Dialogs by Classifying Text as Argumentative. International Journal of Semantic Computing, 5(4):363–381. L. Carlson, D. Marcu, and M. Okurowski. 2002. RST Discourse Treebank (RST-DT) LDC2002T07. Linguistic Data Consortium, Philadelphia. V. Feng and G. Hirst. 2012. Text-level Discourse Parsing with Rich Linguistic Features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, ACL ’12, pages 60–68, Jeju Island, Korea. Association for Computational Linguistics. S. Fisher and B. Roark. 2007. The Utility of Parsederived Features for Automatic Discourse Segmentation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, ACL ’07, pages 488–495, Prague, Czech Republic. Association for Computational Linguistics. M. Galley and K. McKeown. 2003. Improving Word Sense Disambiguation in Lexical Chaining. In Proceedings of the 18th International Joint Conference on Artificial Intelligence, IJCAI ’07, pages 1486– 1488, Acapulco, Mexico. H. Hernault, H. Prendinger, D. duVerle, and M. Ishizuka. 2010. HILDA: A Discourse Parser Using Support Vector Machine Classification. Dialogue and Discourse, 1(3):1–33. S. Joty, G. Carenini, and R. T. Ng. 2012. A Novel Discriminative Framework for Sentence-Level Discourse Analysis. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 904– 915, Jeju Island, Korea. Association for Computational Linguistics. H. LeThanh, G. Abeysinghe, and C. Huyck. 2004. Generating Discourse Structures for Written Texts. In Proceedings of the 20th international conference on Computational Linguistics, COLING ’04, Geneva, Switzerland. Association for Computational Linguistics. A. Louis, A. Joshi, and A. Nenkova. 2010. Discourse Indicators for Content Selection in Summarization. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL ’10, pages 147–156, Tokyo, Japan. Association for Computational Linguistics. W. Mann and S. Thompson. 1988. Rhetorical Structure Theory: Toward a Functional Theory of Text Organization. Text, 8(3):243–281. D. Marcu and A. Echihabi. 2002. An Unsupervised Approach to Recognizing Discourse Relations. 
In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 368–375. Association for Computational Linguistics. D. Marcu. 2000a. The Rhetorical Parsing of Unrestricted Texts: A Surface-based Approach. Computational Linguistics, 26:395–448. D. Marcu. 2000b. The Theory and Practice of Discourse Parsing and Summarization. MIT Press, Cambridge, MA, USA. J. Morris and G. Hirst. 1991. Lexical Cohesion Computed by Thesaural Relations as an Indicator of Structure of Text. Computational Linguistics, 17(1):21–48. R. Prasad, A. Joshi, N. Dinesh, A. Lee, E. Miltsakaki, and B. Webber. 2005. The Penn Discourse TreeBank as a Resource for Natural Language Generation. In Proceedings of the Corpus Linguistics Workshop on Using Corpora for Natural Language Generation, pages 25–32, Birmingham, U.K. S. Somasundaran, 2010. Discourse-Level Relations for Opinion Analysis. PhD thesis, University of Pittsburgh. R. Soricut and D. Marcu. 2003. Sentence Level Discourse Parsing Using Syntactic and Lexical Information. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, NAACL-HLT ’03, pages 149– 156, Edmonton, Canada. Association for Computational Linguistics. C. Sporleder and M. Lapata. 2005. Discourse Chunking and its Application to Sentence Compression. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 257–264, Vancouver, British Columbia, Canada. Association for Computational Linguistics. C. Sporleder and A. Lascarides. 2004. Combining Hierarchical Clustering and Machine Learning to Predict High-Level Discourse Structure. In Proceedings of the 20th international conference on Computational Linguistics, COLING ’04, Geneva, Switzerland. Association for Computational Linguistics. M. Stede. 2004. The Potsdam Commentary Corpus. In Proceedings of the ACL-04 Workshop on Discourse Annotation, Barcelona. Association for Computational Linguistics. R. Subba and B. Di-Eugenio. 2009. An Effective Discourse Parser that Uses Rich Linguistic Information. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL ’09, pages 566–574, Boulder, Colorado. Association for Computational Linguistics. 495 C. Sutton and A. McCallum. 2012. An Introduction to Conditional Random Fields. Foundations and Trends in Machine Learning, 4(4):267–373. C. Sutton, A. McCallum, and K. Rohanimanesh. 2007. Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data. Journal of Machine Learning Research (JMLR), 8:693–723. S. Verberne, L. Boves, N. Oostdijk, and P. Coppen. 2007. Evaluating Discourse-based Answer Extraction for Why-question Answering. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, pages 735–736, Amsterdam, The Netherlands. ACM. N. Vliet and G. Redeker. 2011. Complex Sentences as Leaky Units in Discourse Parsing. In Proceedings of Constraints in Discourse, Agay-Saint Raphael, September. 496
2013
48
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 497–506, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Improving pairwise coreference models through feature space hierarchy learning Emmanuel Lassalle Alpage Project-team INRIA & Univ. Paris Diderot Sorbonne Paris Cit´e, F-75205 Paris [email protected] Pascal Denis Magnet Project INRIA Lille - Nord Europe Avenue Helo¨ıse, 59650 Villeneuve d’Ascq [email protected] Abstract This paper proposes a new method for significantly improving the performance of pairwise coreference models. Given a set of indicators, our method learns how to best separate types of mention pairs into equivalence classes for which we construct distinct classification models. In effect, our approach finds an optimal feature space (derived from a base feature set and indicator set) for discriminating coreferential mention pairs. Although our approach explores a very large space of possible feature spaces, it remains tractable by exploiting the structure of the hierarchies built from the indicators. Our experiments on the CoNLL-2012 Shared Task English datasets (gold mentions) indicate that our method is robust relative to different clustering strategies and evaluation metrics, showing large and consistent improvements over a single pairwise model using the same base features. Our best system obtains a competitive 67.2 of average F1 over MUC, B3, and CEAF which, despite its simplicity, places it above the mean score of other systems on these datasets. 1 Introduction Coreference resolution is the problem of partitioning a sequence of noun phrases (or mentions), as they occur in a natural language text, into a set of referential entities. A common approach to this problem is to separate it into two modules: on the one hand, one defines a model for evaluating coreference links, in general a discriminative classifier that detects coreferential mention pairs. On the other hand, one designs a method for grouping the detected links into a coherent global output (i.e. a partition over the set of entity mentions). This second step is typically achieved using greedy heuristics (McCarthy and Lehnert, 1995; Soon et al., 2001; Ng and Cardie, 2002; Bengston and Roth, 2008), although more sophisticated clustering approaches have been used, too, such as cutting graph methods (Nicolae and Nicolae, 2006; Cai and Strube, 2010) and Integer Linear Programming (ILP) formulations (Klenner, 2007; Denis and Baldridge, 2009). Despite its simplicity, this two-step strategy remains competitive even when compared to more complex models utilizing a global loss (Bengston and Roth, 2008). In this kind of architecture, the performance of the entire coreference system strongly depends on the quality of the local pairwise classifier.1 Consequently, a lot of research effort on coreference resolution has focused on trying to boost the performance of the pairwise classifier. Numerous studies are concerned with feature extraction, typically trying to enrich the classifier with more linguistic knowledge and/or more world knowledge (Ng and Cardie, 2002; Kehler et al., 2004; Ponzetto and Strube, 2006; Bengston and Roth, 2008; Versley et al., 2008; Uryupina et al., 2011). 
A second line of work explores the use of distinct local models for different types of mentions, specifically for different types of anaphoric mentions based on their grammatical categories (such as pronouns, proper names, definite descriptions) (Morton, 2000; Ng, 2005; Denis and Baldridge, 2008).2 An important justification for such spe1There are however no theoretical guarantees that improving pair classification will always result in overall improvements if the two modules are optimized independently. 2Sometimes, distinct sample selections are also adopted 497 cialized models is (psycho-)linguistic and comes from theoretical findings based on salience or accessibility (Ariel, 1988). It is worth noting that, from a machine learning point of view, this is related to feature extraction in that both approaches in effect recast the pairwise classification problem in higher dimensional feature spaces. In this paper, we claim that mention pairs should not be processed by a single classifier, and instead should be handled through specific models. But we are furthermore interested in learning how to construct and select such differential models. Our argument is therefore based on statistical considerations, rather than on purely linguistic ones3. The main question we raise is, given a set of indicators (such as grammatical types, distance between two mentions, or named entity types), how to best partition the pool of mention pair examples in order to best discriminate coreferential pairs from non coreferential ones. In effect, we want to learn the “best” subspaces for our different models: that is, subspaces that are neither too coarse (i.e., unlikely to separate the data well) nor too specific (i.e., prone to data sparseness and noise). We will see that this is also equivalent to selecting a single large adequate feature space by using the data. Our approach generalizes earlier approaches in important ways. For one thing, the definition of the different models is no longer restricted to grammatical typing (our model allows for various other types of indicators) or to the sole typing of the anaphoric mention (our models can also be specific to a particular type antecedent or to the two types of the mention pair). More importantly, we propose an original method for learning the best set of models that can be built from a given set of indicators and a training set. These models are organized in a hierarchy, wherein each leaf corresponds to a mutually disjoint subset of mention pair examples and the classifier that can be trained from it. Our models are trained using the Online Passive-Aggressive algorithm or PA (Crammer et al., 2006), a large margin version of the perceptron. Our method is exact in that it explores the full space of hierarchies (of size at least 22n) definable on an indicator sequence, while remaining scalable by exploiting the particular structure of these during the training of the distinct local models (Ng and Cardie, 2002; Uryupina, 2004). 3However it should be underlined that the statistical viewpoint is complementary to the linguistic work. hierarchies with dynamic programming. This approach also performs well, and it largely outperforms the single model. As will be shown based on a variety of experiments on the CoNLL-2012 Shared Task English datasets, these improvements are consistent across different evaluation metrics and for the most part independent of the clustering decoder that was used. The rest of this paper is organized as follows. 
Section 2 discusses the underlying statistical hypotheses of the standard pairwise model and defines a simple alternative framework that uses a simple separation of mention pairs based on grammatical types. Next, in section 3, we generalize the method by introducing indicator hierarchies and explain how to learn the best models associated with them. Section 4 provides a brief system description and Section 5 evaluates the various models on CoNLL-2012 English datasets. 2 Modeling pairs Pairwise models basically employ one local classifier to decide whether two mentions are coreferential or not. When using machine learning techniques, this involves certain assumptions about the statistical behavior of mention pairs. 2.1 Statistical assumptions Let us adopt a probabilistic point of view to describe the prototype of pairwise models. Given a document, the number of mentions is fixed and each pair of mentions follows a certain distribution (that we partly observe in a feature space). The basic idea of pairwise models is to consider mention pairs independently from each other (that is why a decoder is necessary to enforce transitivity). If we use a single classifier to process all pairs, then they are supposed to be identically distributed. We claim that pairs should not be processed by a single classifier because they are not identically distributed (or a least the distribution is too complex for the classifier); rather, we should separate different “types” on pairs and create a specific model for each of them. Separating different kinds of pairs and handling them with different specific models can lead to more accurate global models. For instance, some coreference resolution systems process different kinds of anaphors separately, which suggests for example that pairs containing an anaphoric pronoun behave differently from pairs with non498 pronominal anaphors. One could rely on a rich set of features to capture complex distributions, but here we actually have a rather limited set of elementary features (see section 4) and, for instance, using products of features must be done carefully to avoid introducing noise in the model. Instead of imposing heuristic product of features, we will show that a clever separation of instances leads to significant improvements of the pairwise model. 2.2 Feature spaces 2.2.1 Definitions We first introduce the problem more formally. Every pair of mentions mi and mj is modeled by a random variable: Pij : Ω → X × Y ω 7→ (xij(ω), yij(ω)) where Ωclassically represents randomness, X is the space of objects (“mention pairs”) that is not directly observable and yij(ω) ∈Y = {+1, −1} are the labels indicating whether mi and mj are coreferential or not. To lighten the notations, we will not always write the index ij. Now we define a mapping: φF : X → F x 7→ x that casts pairs into a feature space F through which we observe them. For us, F is simply a vector space over R (in our case many features are Boolean; they are cast into R as 0 and 1). For technical coherence, we assume that φF1(x(ω)) and φF2(x(ω)) have the same values when projected on the feature space F1 ∩F2: it means that common features from two feature spaces have the same values. From this formal point of view, the task of coreference resolution consists in fixing φF, observing labeled samples {(φF(x), y)t}t∈TrainSet and, given partially observed new variables {(φF(x))t}t∈TestSet, recovering the corresponding values of y. 
2.2.2 Formalizing the statistical assumptions We claimed before that all mention pairs seemed not to be identically distributed since, for example, pronouns do not behave like nominals. We can formulate this more rigorously: since the object space X is not directly observable, we do not know its complexity. In particular, when using a mapping to a too small feature space, the classifier cannot capture the distribution very well: the data is too noisy. Now if we say that pronominal anaphora do not behave like other anaphora, we distinguish two kinds of pair i.e. we state that the distribution of pairs in X is a mixture of two distributions, and we deterministically separate pairs to their specific distribution part. In this way, we may separate positive and negative pairs more easily if we cast each kind of pair into a specific feature space. Let us call these feature spaces F1 and F2. We can either create two independent classifiers on F1 and F2 to process each kind of pair or define a single model on a larger feature space F = F1 ⊕F2. If the model is linear (which is our case), these approaches happen to be equivalent. So we can actually assume that the random variables Pij are identically distributed, but drawn from a complex mixture. A new issue arises: we need to find a mapping φF that renders the best view on the distribution of the data. From a theoretical viewpoint, the higher the dimension of the feature space (imagine taking the direct sum of all feature spaces), the more we get details on the distribution of mention pairs and the more we can expect to separate positives and negatives accurately. In practice, we have to cope with data sparsity: there will not be enough data to properly train a linear model on such a space. Finally, we seek a feature space situated between the two extremes of a space that is too big (sparseness) or too small (noisy data). The core of this work is to define a general method for choosing the most adequate space F among a huge number of possibilities when we do not know a priori which is the best. 2.2.3 Linear models In this work, we try to linearly separate positive and negative instances in the large space F with the Online Passive-Aggressive (PA) algorithm (Crammer et al., 2006): the model learns a parameter vector w that defines a hyperplane that cuts the space into two parts. The predicted class of a pair x with feature vector φF(x) is given by: CF(x) := sign(wT · φF(x)) Linearity implies an equivalence between: (i) separating instances of two types, t1 and t2, in two 499 independent models with respective feature spaces F1 and F2 and parameters w1 and w2, and (ii) a single model on F1⊕F2. To see why, let us define the map: φF1⊕F2(x) :=       φF1(x)T 0 T if x typed t1  0 φF2(x)T T if x typed t2 and the parameter vector w =  w1 w2  ∈F1 ⊕ F2. Then we have: CF1⊕F2(x) = ( CF1(x) if x typed t1 CF2(x) if x typed t2 Now we check that the same property applies when the PA fits its parameter w. For each new instance of the training set, the weight is updated according to the following rule4: wt+1 = arg min w∈F 1 2 ∥w −wt∥2 s.t. l(w; (xt, yt)) = 0 where l(w; (xt, yt)) = min(0, 1−yt(w·φF(xt))), so that when F = F1 ⊕F2, the minimum if x is typed t1 is wt+1 =  w1 t+1 w2 t  and if x is typed t2 is wt+1 =  w1 t w2 t+1  where the wi t+1 correspond to the updates in space Fi independently from the rest. This result can be extended easily to the case of n feature spaces. 
Thus, with a deterministic separation of the data, a large model can be learned using smaller independent models. 2.3 An example: separation by gramtype To motivate our approach, we first introduce a simple separation of mention pairs which creates 9 models obtained by considering all possible pairs of grammatical types {nominal, name, pronoun} for both mentions in the pair (a similar fine-grained separation can be found in (Chen et al., 2011)). This is equivalent to using 9 different feature spaces F1, . . . , F9 to capture the global distribution of pairs. With the PA, this is also a single model with feature space F = F1 ⊕· · · ⊕F9. We will call it the GRAMTYPE model. As we will see in Section 5, these separated models significantly outperform a single model 4The parameter is updated to obtain a margin of a least 1. It does not change if the instance is already correctly classified with such margin. that uses the same base feature set. But we would like to define a method that adapts a feature space to the data by choosing the most adequate separation of pairs. 3 Hierarchizing feature spaces In this section, we have to keep in mind that separating the pairs in different models is the same as building a large feature space in which the parameter w can be learned by parts in independent subspaces. 3.1 Indicators on pairs For establishing a structure on feature spaces, we use indicators which are deterministic functions on mention pairs with a small number of outputs. Indicators classify pairs in predefined categories in one-to-one correspondence with independent feature spaces. We can reuse some features of the system as indicators, e.g. the grammatical or named entity types. We can also employ functions that are not used as features, e.g. the approximate position of one of the mentions in the text. The small number of outputs of an indicator is required for practical reasons: if a category of pairs is too refined, the associated feature space will suffer from data sparsity. Accordingly, distance-based indicators must be approximated by coarse histograms. In our experiments the outputs never exceeded a dozen values. One way to reduce the output span of an indicator is to binarize it like binarizing a tree (many possible binarizations). This operation produces a hierarchy of indicators which is exactly the structure we exploit in what follows. 3.2 Hierarchies for separating pairs We define hierarchies as combinations of indicators creating finer categories of mention pairs: given a finite sequence of indicators, a mention pair is classified by applying the indicators successively, each time refining a category into subcategories, just like in a decision tree (each node having the same number of children as the number of outputs of its indicator). We allow the classification to stop before applying the last indicator, but the behavior must be the same for all the instances. So a hierarchy is basically a sub-tree of the complete decision tree that contains copies of the same indicator at each level. If all the leaves of the decision tree have the 500 same depth, this corresponds to taking the Cartesian product of outputs of all indicators for indexing the categories. In that case, we refer to product-hierarchies. The GRAMTYPE model can be seen as a two level product-hierarchy (figure 1). Figure 1: GRAMTYPE seen as a product-hierarchy Product-hierarchies will be the starting point of our method to find a feature space that fits the data. 
Now choosing a relevant sequence of indicators should be achieved through linguistic intuitions and theoretical work (gramtype separation is one of them). The system will find by itself the best usage of the indicators when optimizing the hierarchy. The sequence is a parameter of the model. 3.3 Relation with feature spaces Like we did for the GRAMTYPE model, we associate a feature space Fi to each leaf of a hierarchy. Likewise, the sum F = L i Fi defines a large feature space. The corresponding parameter w of the model can be obtained by learning the wi in Fi. Given a sequence of indicators, the number of different hierarchies we can define is equal to the number of sub-trees of the complete decision tree (each non-leaf node having all its children). The minimal case is when all indicators are Boolean. The number of full binary trees of height at most n can be computed by the following recursion: T(1) = 1 and T(n + 1) = 1 + T(n)2. So T(n) ≥22n: even with small values of n, the number of different hierarchies (or large feature spaces) definable with a sequence of indicators is gigantic (e.g. T(10) ≈3.8.1090). Among all the possibilities for a large feature space, many are irrelevant because for them the data is too sparse or too noisy in some subspaces. We need a general method for finding an adequate space without enumerating and testing each of them. 3.4 Optimizing hierarchies Let us assume now that the sequence of indicators is fixed, and let n be its length. To find the best feature space among a very high number of possibilities, we need a criterion we can apply without too much additional computation. For that we only evaluate the feature space locally on pairs, i.e. without applying a decoder on the output. We employ 3 measures on pairwise classification results: precision, recall and F1-score. Now selecting the best space for one of these measures can be achieved by using dynamic programming techniques. In the rest of the paper, we will optimize the F1-score. Training the hierarchy Starting from the product-hierarchy, we associate a classifier and its proper feature space to each node of the tree5. The classifiers are then trained as follows: for each instance there is a unique path from the root to a leaf of the complete tree. Each classifier situated on the path is updated with this instance. The number of iterations of the Passive-Aggressive is fixed. Computing scores After training, we test all the classifiers on another set of pairs6. Again, a classifier is tested on an instance only if it is situated on the path from the root to the leaf associated with the instance. We obtain TP/FP/FN numbers7 on pair classifications that are sufficient to compute the F1-score. As for training, the data on which a classifier at a given node is evaluated is the same as the union of all data used to evaluate the classifiers corresponding to the children of this node. Thus we are able to compare the scores obtained at a node to the “union of the scores” obtained at its children. Cutting down the hierarchy For the moment we have a complete tree with a classifier at each node. We use a dynamic programming technique to compute the best hierarchy by cutting this tree and only keeping classifiers situated at the leaf. The algorithm assembles the best local models (or feature spaces) together to create larger models. 
It goes from the leaves to the root and cuts the subtree starting at a node whenever it does not pro5In the experiments, the classifiers use a copy of a same feature space, but not the same data, which corresponds to crossing the features with the categories of the decision tree. 6The training set is cut into two parts, for training and testing the hierarchy. We used 10-fold cross-validation in our experiments. 7True positives, false positives and false negatives. 501 vide a better score than the node itself, or on the contrary propagates the score of the sub-tree when there is an improvement. The details are given in algorithm 1. list ←list of nodes given by a breadth-first 1 search for node in reversed list do if node.children ̸= ∅then 2 if sum-score(node.children) > 3 node.score then node.TP/FP/FN ← 4 sum-num(node.children) else 5 node.children ←∅ 6 end 7 end 8 end 9 Algorithm 1: Cutting down a hierarchy Let us briefly discuss the correctness and complexity of the algorithm. Each node is seen two times so the time complexity is linear in the number of nodes which is at least O(2n). However, only nodes that have encountered at least one training instance are useful and there are O(n × k) such nodes (where k the size of the training set). So we can optimize the algorithm to run in time O(n × k)8. If we scan the list obtained by breadth-first search backwards, we are ensured that every node will be processed after its children. (node.children) is the set of children of node, and (node.score) its score. sum-num provides TP/FP/FN by simply adding those of the children and sum-score computes the score based on these new TP/FP/FN numbers. (line 6) cuts the children of a node when they are not used in the best score. The algorithm thus propagates the best scores from the leaves to the root which finally gives a single score corresponding to the best hierarchy. Only the leaves used to compute the best score are kept, and they define the best hierarchy. Relation between cutting and the global feature space We can see the operation of cutting as replacing a group of subspaces by a single subspace in the sum (see figure 2). So cutting down the product-hierarchy amounts to reducing the global initial feature space in an optimal way. 8In our experiments, cutting down the hierarchy was achieved very quickly, and the total training time was about five times longer than with a single model. Figure 2: Cutting down the hierarchy reduces the feature space To sum up, the whole procedure is equivalent to training more than O(2n) perceptrons simultaneously and selecting the best performing. 4 System description Our system consists in the pairwise model obtained by cutting a hierarchy (the PA with selected feature space) and using a greedy decoder to create clusters from the output. It is parametrized by the choice of the initial sequence of indicators. 4.1 The base features We used classical features that can be found in details in (Bengston and Roth, 2008) and (Rahman and Ng, 2011): grammatical type and subtype of mentions, string match and substring, apposition and copula, distance (number of separating mentions/sentences/words), gender/number match, synonymy/hypernym and animacy (using WordNet), family name (based on lists), named entity types, syntactic features (gold parse) and anaphoricity detection. 
4.2 Indicators As indicators we used: left and right grammatical types and subtypes, entity types, a boolean indicating if the mentions are in the same sentence, and a very coarse histogram of distance in terms of sentences. We systematically included right gramtype and left gramtype in the sequences and added other indicators, producing sequences of different lengths. The parameter was optimized by document categories using a development set after decoding the output of the pairwise model. 4.3 Decoders We tested 3 classical greedy link selection strategies that form clusters from the classifier decision: Closest-First (merge mentions with their closest coreferent mention on the left) (Soon et al., 2001), 502 Best-first (merge mentions with the mention on the left having the highest positive score) (Ng and Cardie, 2002; Bengston and Roth, 2008), and Aggressive-Merge (transitive closure on positive pairs) (McCarthy and Lehnert, 1995). Each of these decoders is typically (although not always) used in tandem with a specific sampling selection at training. Thus, Closest-First for instance is used in combination with a sample selection that generates training instances only for the mentions that occur between the closest antecedent and the anaphor (Soon et al., 2001). P R F1 SINGLE MODEL 22.28 63.50 32.99 RIGHT-TYPE 29.31 45.23 35.58 GRAMTYPE 39.12 45.83 42.21 BEST HIERARCHY 45.27 51.98 48.40 Table 1: Pairwise scores on CoNLL-2012 test. 5 Experiments 5.1 Data We evaluated the system on the English part of the corpus provided in the CoNLL-2012 Shared Task (Pradhan et al., 2012), referred to as CoNLL-2012 here. The corpus contains 7 categories of documents (over 2K documents, 1.3M words). We used the official train/dev/test data sets. We evaluated our system in the closed mode which requires that only provided data is used. 5.2 Settings Our baselines are a SINGLE MODEL, the GRAMTYPE model (section 2) and a RIGHT-TYPE model, defined as the first level of the gramtype product hierarchy (i.e. grammatical type of the anaphora (Morton, 2000)), with each greedy decoder and also the original sampling with a single model associated with those decoders. The hierarchies were trained with 10-fold crossvalidation on the training set (the hierarchies are cut after cumulating the scores obtained by crossvalidation) and their parameters are optimized by document category on the development set: the sequence of indicators obtaining the best average score after decoding was selected as parameter for the category. The obtained hierarchy is referred to as the BEST HIERARCHY in the results. We fixed the number of iterations for the PA for all models. In our experiments, we consider only the gold mentions. This is a rather idealized setting but our focus is on comparing various pairwise local models rather than on building a full coreference resolution system. Also, we wanted to avoid having to consider too many parameters in our experiments. 5.3 Evaluation metrics We use the three metrics that are most commonly used9, namely: MUC (Vilain et al., 1995) computes for each true entity cluster the number of system clusters that are needed to cover it. Precision is this quantity divided by the true cluster size minus one. Recall is obtained by reversing true and predicated clusters. F1 is the harmonic mean. B3 (Bagga and Baldwin, 1998) computes recall and precision scores for each mention, based on the intersection between the system/true clusters for that mention. 
Precision is the ratio of the intersection and the true cluster sizes, while recall is the ratio of the intersection to the system cluster sizes. Global recall, precision, and F1 scores are obtained by averaging over the mention scores. CEAF (Luo, 2005) scores are obtained by computing the best one-to-one mapping between the system/true partitions, which is equivalent to finding the best optimal alignment in the bipartite graph formed out of these partitions. We use the φ4 similarity function from (Luo, 2005). These metrics were recently used in the CoNLL2011 and -2012 Shared Tasks. In addition, these campaigns use an unweighted average over the F1 scores given by the three metrics. Following common practice, we use micro-averaging when reporting our scores for entire datasets. 5.4 Results The results obtained by the system are reported in table 2. The original sampling for the single model associated to Closest-First and Best-First decoder are referred to as SOON and NGCARDIE. The P/R/F1 pairwise scores before decoding are given in table 1. BEST HIERARCHY obtains a strong improvement in F1 (+15), a better precision and a less significant diminution of recall compared to GRAMTYPE and RIGHT-TYPE. 9BLANC metric (Recasens and Hovy, 2011) results are not reported since they are not used to compute the CoNLL2012 global score. However we can mention that in our experiments, using hierarchies had a positive effect similar to what was observed on B3 and CEAF. 503 MUC B3 CEAF Closest-First P R F1 P R F1 P R F1 Mean SOON 79.49 93.72 86.02 26.23 89.43 40.56 49.74 19.92 28.44 51.67 SINGLE MODEL 78.95 75.15 77.0 51.88 68.42 59.01 37.79 43.89 40.61 58.87 RIGHT-TYPE 79.36 67.57 72.99 69.43 56.78 62.47 41.17 61.66 49.37 61.61 GRAMTYPE 80.5 71.12 75.52 66.39 61.04 63.6 43.11 59.93 50.15 63.09 BEST HIERARCHY 83.23 73.72 78.19 73.5 67.09 70.15 47.3 60.89 53.24 67.19 MUC B3 CEAF Best-First P R F1 P R F1 P R F1 Mean NGCARDIE 81.02 93.82 86.95 23.33 93.92 37.37 40.31 18.97 25.8 50.04 SINGLE MODEL 79.22 73.75 76.39 40.93 75.48 53.08 30.52 37.59 33.69 54.39 RIGHT-TYPE 77.13 65.09 70.60 48.11 66.21 55.73 31.07 47.30 37.50 54.61 GRAMTYPE 77.21 65.89 71.1 49.77 67.19 57.18 32.08 47.83 38.41 55.56 BEST HIERARCHY 78.11 69.82 73.73 53.62 70.86 61.05 35.04 46.67 40.03 58.27 MUC B3 CEAF Aggressive-Merge P R F1 P R F1 P R F1 Mean SINGLE MODEL 83.15 88.65 85.81 35.67 88.18 50.79 36.3 28.27 31.78 56.13 RIGHT-TYPE 83.48 89.79 86.52 36.82 88.08 51.93 45.30 33.84 38.74 59.07 GRAMTYPE 83.12 84.27 83.69 44.73 81.58 57.78 45.02 42.94 43.95 61.81 BEST HIERARCHY 83.26 85.2 84.22 45.65 82.48 58.77 46.28 43.13 44.65 62.55 Table 2: CoNLL-2012 test (gold mentions): Closest-First, Best-First and Aggressive-Merge decoders. Despite the use of greedy decoders, we observe a large positive effect of pair separation in the pairwise models on the outputs. On the mean score, the use of distinct models versus a single model yields F1 increases from 6.4 up to 8.3 depending on the decoder. Irrespective of the decoder being used, GRAMTYPE always outperforms RIGHT-TYPE and single model and is always outperformed by BEST HIERARCHY model. Interestingly, we see that the increment in pairwise and global score are not proportional: for instance, the strong improvement of F1 between RIGHT-TYPE and GRAMTYPE results in a small amelioration of the global score. 
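For reference, the mention-level B3 computation can be sketched as follows, assuming the same mention set in both clusterings (the gold-mention setting used here) and the standard Bagga and Baldwin (1998) formulation, in which each mention's precision is measured against its system cluster and its recall against its true cluster; the toy clusterings are made up.

```python
def b_cubed(gold_clusters, system_clusters):
    """B3 precision/recall/F1 for clusterings given as lists of mention sets."""
    gold_of = {m: c for c in gold_clusters for m in c}
    sys_of = {m: c for c in system_clusters for m in c}
    mentions = list(gold_of)
    p = sum(len(gold_of[m] & sys_of[m]) / len(sys_of[m]) for m in mentions) / len(mentions)
    r = sum(len(gold_of[m] & sys_of[m]) / len(gold_of[m]) for m in mentions) / len(mentions)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Toy example:
gold = [{"a", "b", "c"}, {"d"}]
system = [{"a", "b"}, {"c", "d"}]
print(b_cubed(gold, system))   # (0.75, 0.666..., 0.705...)
```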
Depending on the document category, we found some variations as to which hierarchy was learned in each setting, but we noticed that parameters starting with right and left gramtypes often produced quite good hierarchies: for instance right gramtype →left gramtype →same sentence → right named entity type. We observed that product-hierarchies did not performed well without cutting (especially when using longer sequences of indicators, because of data sparsity) and could obtain scores lower than the single model. Hopefully, after cutting them the results always became better as the resulting hierarchy was more balanced. Looking at the different metrics, we notice that overall, pair separation improves B3 and CEAF (but not always MUC) after decoding the output: GRAMTYPE provides a better mean score than the single model, and BEST HIERARCHY gives the highest B3, CEAF and mean score. The best classifier-decoder combination reaches a score of 67.19, which would place it above the mean score (66.41) of the systems that took part in the CoNLL-2012 Shared Task (gold mentions track). Except for the first at 77.22, the best performing systems have a score around 68-69. Considering the simple decoding strategy we employed, our current system sets up a strong baseline. 6 Conclusion and perspectives In this paper, we described a method for selecting a feature space among a very large number of choices by using linearity and by combining indicators to separate the instances. We employed dynamic programming on hierarchies of indicators to compute the feature space providing the best pairwise classifications efficiently. We applied this 504 method to optimize the pairwise model of a coreference resolution system. Using different kinds of greedy decoders, we showed a significant improvement of the system. Our approach is flexible in that we can use a variety of indicators. In the future we will apply the hierarchies on finer feature spaces to make more accurate optimizations. Observing that the general method of cutting down hierarchies is not restricted to modeling mention pairs, but can be applied to problems having Boolean aspects, we aim at employing hierarchies to address other tasks in computational linguistics (e.g. anaphoricity detection or discourse and temporal relation classification wherein position information may help separating the data). In this work, we have only considered standard, heuristic linking strategies like Closest-First. So, a natural extension of this work is to combine our method for learning pairwise models with more sophisticated decoding strategies (like Bestcut or using ILP). Then we can test the impact of hierarchies with more realistic settings. Finally, the method for cutting hierarchies should be compared to more general but similar methods, for instance polynomial kernels for SVM and tree-based methods (Hastie et al., 2001). We also plan to extend our method by breaking the symmetry of our hierarchies. Instead of cutting product-hierarchies, we will employ usual techniques to build decision trees10 and apply our cutting method on their structure. The objective is twofold: first, we will get rid of the sequence of indicators as parameter. Second, we will avoid fragmentation or overfitting (which can arise with classification trees) by deriving an optimal large margin linear model from the tree structure. Acknowledgments We thank the ACL 2013 anonymous reviewers for their valuable comments. References M. Ariel. 1988. Referring and accessibility. 
Journal of Linguistics, pages 65–87. A. Bagga and B. Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of LREC 1998, pages 563–566. 10(Bansal and Klein, 2012) show good performances of decision trees on coreference resolution. Mohit Bansal and Dan Klein. 2012. Coreference semantics from web features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 389–398. Association for Computational Linguistics. Eric Bengston and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Proceedings of EMNLP 2008, pages 294–303, Honolulu, Hawaii. Jie Cai and Michael Strube. 2010. End-to-end coreference resolution via hypergraph partitioning. In COLING, pages 143–151. Bin Chen, Jian Su, Sinno Jialin Pan, and Chew Lim Tan. 2011. A unified event coreference resolution by integrating multiple resolvers. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 102–110, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551–585. Pascal Denis and Jason Baldridge. 2008. Specialized models and ranking for coreference resolution. In Proceedings of EMNLP 2008, pages 660–669, Honolulu, Hawaii. Pascal Denis and Jason Baldridge. 2009. Global joint models for coreference resolution and named entity classification. Procesamiento del Lenguaje Natural, 43. Trevor Hastie, Robert Tibshirani, and J. H. Friedman. 2001. The elements of statistical learning: data mining, inference, and prediction: with 200 fullcolor illustrations. New York: Springer-Verlag. A. Kehler, D. Appelt, L. Taylor, and A. Simma. 2004. The (non)utility of predicate-argument frequencies for pronoun interpretation. In Proceedings of HLTNAACL 2004. M. Klenner. 2007. Enforcing coherence on coreference sets. In Proceedings of RANLP 2007. X. Luo. 2005. On coreference resolution performance metrics. In Proceedings of HLT-NAACL 2005, pages 25–32. J. F. McCarthy and W. G. Lehnert. 1995. Using decision trees for coreference resolution. In IJCAI, pages 1050–1055. T. Morton. 2000. Coreference for NLP applications. In Proceedings of ACL 2000, Hong Kong. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of ACL 2002, pages 104–111. 505 V. Ng. 2005. Supervised ranking for pronoun resolution: Some recent improvements. In Proceedings of AAAI 2005. Cristina Nicolae and Gabriel Nicolae. 2006. Bestcut: A graph algorithm for coreference resolution. In EMNLP, pages 275–283. Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In Proceedings of the HLT 2006, pages 192–199, New York City, N.Y. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 1–40, Jeju Island, Korea, July. Association for Computational Linguistics. Altaf Rahman and Vincent Ng. 2011. Narrowing the modeling gap: a cluster-ranking approach to coreference resolution. J. Artif. Int. Res., 40(1):469–521. Recasens and Hovy. 2011. Blanc: Implementing the rand index for coreference evaluation. Natural Language Engineering, 17:485–510, 9. W. M. Soon, H. T. 
Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. Olga Uryupina, Massimo Poesio, Claudio Giuliano, and Kateryna Tymoshenko. 2011. Disambiguation and filtering methods in using web knowledge for coreference resolution. In FLAIRS Conference. O. Uryupina. 2004. Linguistically motivated sample selection for coreference resolution. In Proceedings of DAARC 2004, Furnas. Yannick Versley, Alessandro Moschitti, Massimo Poesio, and Xiaofeng Yang. 2008. Coreference systems based on kernels methods. In COLING, pages 961–968. M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the 6th Message Understanding Conference (MUC-6), pages 45–52, San Mateo, CA. Morgan Kaufmann.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 43–52, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Smoothed marginal distribution constraints for language modeling Brian Roark†◦, Cyril Allauzen◦and Michael Riley◦ †Oregon Health & Science University, Portland, Oregon ◦Google, Inc., New York [email protected], {allauzen,riley}@google.com Abstract We present an algorithm for re-estimating parameters of backoff n-gram language models so as to preserve given marginal distributions, along the lines of wellknown Kneser-Ney (1995) smoothing. Unlike Kneser-Ney, our approach is designed to be applied to any given smoothed backoff model, including models that have already been heavily pruned. As a result, the algorithm avoids issues observed when pruning Kneser-Ney models (Siivola et al., 2007; Chelba et al., 2010), while retaining the benefits of such marginal distribution constraints. We present experimental results for heavily pruned backoff ngram models, and demonstrate perplexity and word error rate reductions when used with various baseline smoothing methods. An open-source version of the algorithm has been released as part of the OpenGrm ngram library.1 1 Introduction Smoothed n-gram language models are the defacto standard statistical models of language for a wide range of natural language applications, including speech recognition and machine translation. Such models are trained on large text corpora, by counting the frequency of n-gram collocations, then normalizing and smoothing (regularizing) the resulting multinomial distributions. Standard techniques store the observed n-grams and derive probabilities of unobserved n-grams via their longest observed suffix and “backoff” costs associated with the prefix histories of the unobserved suffixes. Hence the size of the model grows with the number of observed n-grams, which is very large for typical training corpora. 1www.opengrm.org Natural language applications, however, are commonly used in scenarios requiring relatively small footprint models. For example, applications running on mobile devices or in low latency streaming scenarios may be required to limit the complexity of models and algorithms to achieve the desired operating profile. As a result, statistical language models – an important component of many such applications – are often trained on very large corpora, then modified to fit within some pre-specified size bound. One method to achieve significant space reduction is through randomized data structures, such as Bloom (Talbot and Osborne, 2007) or Bloomier (Talbot and Brants, 2008) filters. These data structures permit efficient querying for specific n-grams in a model that has been stored in a fraction of the space required to store the full, exact model, though with some probability of false positives. Another common approach – which we pursue in this paper – is model pruning, whereby some number of the n-grams are removed from explicit storage in the model, so that their probability must be assigned via backoff smoothing. One simple pruning method is count thresholding, i.e., discarding n-grams that occur less than k times in the corpus. 
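Count thresholding amounts to a single pass over the count table, as in the sketch below; dictionary-based storage is assumed purely for illustration, and unigrams are kept so that the vocabulary is preserved.

```python
def count_threshold_prune(ngram_counts, k):
    """Drop n-grams occurring fewer than k times.
    `ngram_counts` maps tuples of words to counts; unigrams are never pruned,
    so the vocabulary (and the backoff structure) stays intact."""
    return {ng: c for ng, c in ngram_counts.items()
            if c >= k or len(ng) == 1}

counts = {("the",): 900, ("the", "cat"): 3, ("the", "dog"): 1}
print(count_threshold_prune(counts, k=2))   # drops ("the", "dog")
```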
Beyond count thresholding, the most widely used pruning methods (Seymore and Rosenfeld, 1996; Stolcke, 1998) employ greedy algorithms to reduce the number of stored n-grams by comparing the stored probabilities to those that would be assigned via the backoff smoothing mechanism, and removing those with the least impact according to some criterion. While these greedy pruning methods are highly effective for models estimated with most common smoothing approaches, they have been shown to be far less effective with Kneser-Ney trained language models (Siivola et al., 2007; Chelba et al., 2010), leading to severe degradation in model quality relative to other standard smoothing meth43 4-gram models Backoff Interpolated Perplexity n-grams Perplexity n-grams Smoothing method full pruned (×1000) full pruned (×1000) Absolute Discounting (Ney et al., 1994) 120.5 197.3 383.4 119.8 198.1 386.2 Witten-Bell (Witten and Bell, 1991) 118.8 196.3 380.4 121.6 202.3 396.4 Ristad (1995) 126.4 203.6 395.6 ——- N/A ——Katz (1987) 119.8 198.1 386.2 ——- N/A ——Kneser-Ney (Kneser and Ney, 1995) 114.5 285.1 388.2 115.8 274.3 398.7 Mod. Kneser-Ney (Chen and Goodman, 1998) 116.3 280.6 396.2 112.8 270.7 399.1 Table 1: Reformatted version of Table 3 in Chelba et al. (2010), demonstrating perplexity degradation of Kneser-Ney smoothed models in contrast to other common smoothing methods. Data: English Broadcast News, 128M words training; 692K words test; 143K word vocabulary. 4-gram language models, pruned using Stolcke (1998) relative entropy pruning to approximately 1.3% of the original size of 31,095,260 n-grams. ods. Thus, while Kneser-Ney may be the preferred smoothing method for large, unpruned models – where it can achieve real improvements over other smoothing methods – when relatively sparse, pruned models are required, it has severely diminished utility. Table 1 presents a slightly reformatted version of Table 3 from Chelba et al. (2010). In their experiments (see Table 1 caption for specifics on training/test setup), they trained 4-gram Broadcast News language models using a variety of both backoff and interpolated smoothing methods and measured perplexity before and after Stolcke (1998) relative entropy based pruning. With this size training data, the perplexity of all of the smoothing methods other than Kneser-Ney degrades from around 120 with the full model to around 200 with the heavily pruned model. Kneser-Ney smoothed models have lower perplexity with the full model than the other methods by about 5 points, but degrade with pruning to far higher perplexity, between 270-285. The cause of this degradation is Kneser-Ney’s unique method for estimating smoothed language models, which will be presented in more detail in Section 3. Briefly, the smoothing method reestimates lower-order n-gram parameters in order to avoid over-estimating the likelihood of n-grams that already have ample probability mass allocated as part of higher-order n-grams. This is done via a marginal distribution constraint which requires the expected frequency of the lower-order n-grams to match their observed frequency in the training data, much as is commonly done for maximum entropy model training. Goodman (2001) proved that, under certain assumptions, such constraints can only improve language models. Lower-order n-gram parameters resulting from Kneser-Ney are not relative frequency estimates, as with other smoothing methods; rather they are parameters estimated specifically for use within the larger smoothed model. 
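The difference can be seen in a small example: with absolute discounting as the base smoother, the Kneser-Ney lower-order parameters reduce to continuation counts, i.e. the number of distinct histories a word follows, rather than its raw frequency. The sketch below contrasts the two estimates on made-up bigram counts.

```python
from collections import Counter

# Made-up bigram counts (illustrative only).
bigram_counts = Counter({
    ("san", "francisco"): 8,
    ("the", "dog"): 3, ("a", "dog"): 3, ("my", "dog"): 2,
})

# Raw relative frequency of the second word: c(w) / total tokens.
unigram_counts = Counter()
for (_, w), c in bigram_counts.items():
    unigram_counts[w] += c
total = sum(unigram_counts.values())
rel_freq = {w: c / total for w, c in unigram_counts.items()}

# Kneser-Ney continuation estimate: the number of distinct histories that
# precede w, normalised by the total number of distinct bigram types.
continuation = Counter(w for (_, w) in bigram_counts)
n_bigram_types = sum(continuation.values())
kn_lower_order = {w: n / n_bigram_types for w, n in continuation.items()}

print(rel_freq)        # {'francisco': 0.5, 'dog': 0.5}  -- equally frequent
print(kn_lower_order)  # {'francisco': 0.25, 'dog': 0.75} -- "francisco" follows
                       # only one history, so its lower-order parameter is small
```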
There are (at least) a couple of reasons why such parameters do not play well with model pruning. First, the pruning methods commonly use lower order n-gram probabilities to derive an estimate of state marginals, and, since these parameters are no longer smoothed relative frequency estimates, they do not serve that purpose well. For this reason, the widely-used SRILM toolkit recently provided switches to modify their pruning algorithm to use another model for state marginal estimates (Stolcke et al., 2011). Second, and perhaps more importantly, the marginal constraints that were applied prior to smoothing will not in general be consistent with the much smaller pruned model. For example, if a bigram parameter is modified due to the presence of some set of trigrams, and then some or all of those trigrams are pruned from the model, the bigram associated with the modified parameter will be unlikely to have an overall expected frequency equal to its observed frequency anymore. As a result, the resulting model degrades dramatically with pruning. In this paper, we present an algorithm that imposes marginal distribution constraints of the sort used in Kneser-Ney modeling on arbitrary smoothed backoff n-gram language models. Our approach makes use of the same sort of derivation as the original Kneser-Ney modeling, but, among other differences, relies on smoothed estimates of the empirical relative frequency rather than the unsmoothed observed frequency. The algorithm can be applied after the smoothed model has been pruned, hence avoiding the pitfalls associated with Kneser-Ney modeling. Furthermore, while Kneser-Ney is conventionally defined as a variant of absolute discounting, our method can be applied to models smoothed with any backoff smoothing, including mixtures of models, widely 44 used for domain adaptation. We next establish formal preliminaries and our smoothed marginal distribution constraints method. 2 Preliminaries N-gram language models are typically presented mathematically in terms of words w, the strings (histories) h that precede them, and the suffixes of the histories (backoffs) h′ that are used in the smoothing recursion. Let V be a vocabulary (alphabet), and V ∗a string of zero or more symbols drawn from V . Let V k denote the set of strings w ∈V ∗of length k, i.e., |w| = k. We will use variables u, v, w, x, y, z ∈V to denote single symbols from the vocabulary; h, g ∈V ∗to denote history sequences preceding the specific word; and h′, g′ ∈V ∗the respective backoff histories of h and g as typically defined (see below). For a string w = w1 . . . w|w| we can calculate the smoothed conditional probability of each word wi in the sequence given the k words that preceded it, depending on the order of the Markov model. Let hk i = wi−k . . . wi−1 be the previous k words in the sequence. Then the smoothed model is defined recursively as follows: P(wi | hk i ) =  P(wi | hk i ) if c(hk i wi) > 0 α(hk i ) P(wi | hk−1 i ) otherwise where c(hk i wi) is the count of the n-gram sequence wi−k . . . wi in the training corpus; P is a regularized probability estimate that provides some probability mass for unobserved n-grams; and α(hk i ) is a factor that ensures normalization. Note that for h = hk i , the typically defined backoff history h′ = hk−1 i , i.e., the longest suffix of h that is not h itself. When we use h′ and g′ (for notational convenience) in future equations, it is this definition that we are using. 
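A minimal sketch of this recursion, with explicitly stored n-grams and backoff weights held in dictionaries keyed by word tuples (the storage scheme is illustrative, not the WFST representation introduced later):

```python
def backoff_prob(w, h, prob, alpha):
    """Smoothed P(w | h) via the recursion above.  `prob` maps (history, word)
    pairs of explicitly stored n-grams to their regularised estimates; `alpha`
    maps histories to their backoff weights; histories are word tuples."""
    if (h, w) in prob:
        return prob[(h, w)]
    if not h:
        raise KeyError(f"{w} is not in the vocabulary")
    # Back off to the longest proper suffix of h; unigrams live under the
    # empty history (), so the recursion bottoms out there.
    return alpha[h] * backoff_prob(w, h[1:], prob, alpha)

# Toy fragment of a trigram model:
prob = {((), "china"): 0.01, (("in",), "china"): 0.2}
alpha = {("control", "in"): 0.4, ("in",): 0.5}
print(backoff_prob("china", ("control", "in"), prob, alpha))  # 0.4 * 0.2 = 0.08
```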
There are many ways to estimate P, including absolute discounting (Ney et al., 1994), Katz (1987) and Witten and Bell (1991). Interpolated models are special cases of this form, where the P is determined using model mixing, and the α parameter is exactly the mixing factor value for the lower order model. N-gram language models allow for a sparse representation, so that only a subset of the possible ngrams must be explicitly stored. Probabilities for the rest of the n-grams are calculated through the “otherwise” semantics in the equation above. For an n-gram language model G, we will say that an n-gram hw ∈G if it is explicitly represented in the model; otherwise hw ̸∈G. In the standard ngram formulation above, the assumption is that if c(hk i wi) > 0 then the n-gram has a parameter; yet with pruning, we remove many observed n-grams from the model, hence this is no longer the appropriate criterion. We reformulate the standard equation as follows: P(wi|hk i ) =  β(hk i wi) if hk i wi ∈G α(hk i , hk−1 i ) P(wi|hk−1 i ) otherwise (1) where β(hk i wi) is the parameter associated with the n-gram hk i wi and α(hk i , hk−1 i ) is the backoff cost associated with going from state hk i to state hk−1 i . We assume that, if hw ∈G then all prefixes and suffixes of hw are also in G. Figure 1 presents a schema of an automaton representation of an n-gram model, of the sort used in the OpenGrm library (Roark et al., 2012). States represent histories h, and the words w, whose probabilities are conditioned on h, label the arcs, leading to the history state for the subsequent word. State labels are provided in Figure 1 as a convenience, to show the (implicit) history encoded by the state, e.g., ‘xyz’ indicates that the state represents a history with the previous three symbols being x, y and z. Failure arcs, labeled with a φ in Figure 1, encode an “otherwise” semantics and have as destination the origin state’s backoff history. Many higher order states will back off to the same lower order state, specifically those that share the same suffix. Note that, in general, the recursive definition of backoff may require the traversal of several backyz z xyz u/β(xyzu) w/β(yzw) w/β(zw) φ/α(xyz,yz) φ/α(yz,z) zw yyz φ/α(yyz,yz) yzw ε yzu yzv v/β(yyzv) w/β(yyzw) φ/α(z,ε) φ/α(yzw,zw) z/β(z) Figure 1: N-gram weighted automaton schema. State labels are presented for convenience, to specify the history implicitly encoded by the state. 45 off arcs before emitting a word, e.g., the highest order states in Figure 1 needing to traverse a couple of φ arcs to reach state ‘z’. We can define the backoff cost between a state hk i and any of its suffix states as follows. Let α(h, h) = 1 and for m > 1, α(hk i , hk−m i ) = m Y j=1 α(hk−j+1 i , hk−j i ). If hk i w ̸∈G then the probability of that n-gram will be defined in terms of backoff to its longest suffix hk−m i w ∈G. Let hwG denote the longest suffix of h such that hwGw ∈G. Note that this is not necessarily a proper suffix, since hwG could be h itself or it could be ϵ. Then P(w | h) = α(h, hwG) β(hwGw) (2) which is equivalent to equation 1. 3 Marginal distribution constraints Marginal distribution constraints attempt to match the expected frequency of an n-gram with its observed frequency. In other words, if we use the model to randomly generate a very large corpus, the n-grams should occur with the same relative frequency in both the generated and original (training) corpus. Standard smoothing methods overgenerate lower-order n-grams. 
Using standard n-gram notation (where g′ is the backoff history for g), this constraint is stated in Kneser and Ney (1995) as bP(w | h′) = X g:g′=h′ P(g, w | h′) (3) where bP is the empirical relative frequency estimate. Taking this approach, certain base smoothing methods end up with very nice, easy to calculate solutions based on counts. Absolute discounting (Ney et al., 1994) in particular, using the above approach, leads to the well-known KneserNey smoothing approach (Kneser and Ney, 1995; Chen and Goodman, 1998). We will follow this same approach, with a couple of changes. First, we will make use of regularized estimates of relative frequency P rather than raw relative frequency bP. Second, rather than just looking at observed histories h that back off to h′, we will look at all histories (observed or not) of the length of the longest history in the model. For notational simplicity, suppose we have an n+1-gram model, hence the longest history in the model is of length n. Assume the length of the particular backoff history |h′| = k. Let V n−kh′ be the set of strings h ∈V n with h′ as a suffix. Then we can restate the marginal distribution constraint in equation 3 as P(w | h′) = X h∈V n−kh′ P(h, w | h′) (4) Next we solve for β(h′w) parameters used in equation 1. Note that h′ is a suffix of any h ∈ V n−kh′, so conditioning probabilities on h and h′ is the same as conditioning on just h. Each of the following derivation steps simply relies on the chain rule or definition of conditional probability, as well as pulling terms out of the summation. P(w | h′) = X h∈V n−kh′ P(h, w | h′) = X h∈V n−kh′ P(w | h, h′) P(h | h′) = X h∈V n−kh′ P(w | h) P(h) X g∈V n−kh′ P(g) = 1 X g∈V n−kh′ P(g) X h∈V n−kh′ P(w | h) P(h) (5) Then, multiplying both sides by the normalizing denominator on the right-hand side and using equation 2 to substitute α(h, hwG) β(hwGw) for P(w | h): P(w | h′) X g∈V n−kh′ P(g) = X h∈V n−kh′ P(w | h) P(h) = X h∈V n−kh′ α(h, hwG) β(hwGw) P(h) (6) Note that we are only interested in h′w ∈G, hence there are two disjoint subsets of histories h ∈V n−kh′ that are being summed over: those such that hwG = h′ and those such that |hwG| > |h′|. We next separate these sums in the next step of the derivation: P(w | h′) X g∈V n−kh′ P(g) = X h∈V n−kh′:|hwG|>|h′| α(h, hwG) β(hwGw) P(h) + X h∈V n−kh′:hwG=h′ α(h, h′) β(h′w) P(h) (7) Finally, we solve for β(h′w) in the second sum on the right-hand side of equation 7, yielding the formula in equation 8. Note that this equation is the correlate of equation (6) in Kneser and Ney 46 β(h′w) = P(w | h′) X g∈V n−kh′ P(g) − X h∈V n−kh′:|hwG|>|h′| α(h, hwG) β(hwGw) P(h) X h∈V n−kh′:hwG=h′ α(h, h′) P(h) (8) (1995), modulo the two differences noted earlier: use of smoothed probability P rather than raw relative frequency; and summing over all history substrings in V n−kh′ rather than just those with count greater than zero, which is also a change due to smoothing. Keep in mind, P is the target expected frequency from a given smoothed model. KneserNey models are not useful input models, since their P n-gram parameters are not relative frequency estimates. This means that we cannot simply ‘repair’ pruned Kneser-Ney models, but must use other smoothing methods where the smoothed values are based on relative frequency estimation. 
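Equation 8 can be read off directly as code. The sketch below assumes the required quantities — the target estimates P, the steady state probabilities P(h), the backoff costs α, the already re-estimated higher-order β values, and a longest-suffix lookup — are available as precomputed dictionaries and a helper function; it also applies a small floor to guard against a non-positive numerator, as discussed in Section 4.2.

```python
def reestimate_beta(w, h_prime, histories, P_target, P_state, alpha, beta,
                    longest_suffix_in_G, eps=1e-3):
    """Un-normalised re-estimated parameter for the n-gram h'w (Eq. 8).

    `histories`: the length-n histories h having h' as a suffix (per Section
    4.2, only states explicitly in the model need to be enumerated);
    `longest_suffix_in_G(h, w)`: returns h_wG, the longest suffix of h such
    that h_wG w is an explicit n-gram.  The values for all words leaving
    state h' are normalised afterwards."""
    norm = sum(P_state[h] for h in histories)
    numerator = P_target[(h_prime, w)] * norm
    denominator = 0.0
    for h in histories:
        h_wg = longest_suffix_in_G(h, w)
        if len(h_wg) > len(h_prime):
            # Probability mass already produced by higher-order n-grams.
            numerator -= alpha[(h, h_wg)] * beta[(h_wg, w)] * P_state[h]
        else:
            # h_wg == h_prime: these histories actually back off to h'w.
            denominator += alpha[(h, h_prime)] * P_state[h]
    return max(numerator, eps) / denominator
```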
There are, in addition, two other important differences in our approach from that in Kneser and Ney (1995), which would remain as differences even if our target expected frequency were the unsmoothed relative frequency bP instead of the smoothed estimate P. First, the sum in the numerator is over histories of length n, the highest order in the n-gram model, whereas in the KneserNey approach the sum is over histories that immediately back off to h′, i.e., from the next highest order in the n-gram model. Thus the unigram distribution is with respect to the bigram model, the bigram model is with respect to the trigram model, and so forth. In our optimization, we sum instead over all possible history sequences of length n. Second, an early assumption made in Kneser and Ney (1995) is that the denominator term in their equation (6) (our Eq. 8) is constant across all words for a given history, which is clearly false. We do not make this assumption. Of course, the probabilities must be normalized, hence the final values of β(h′w) will be proportional to the values in Eq. 8. We briefly note that, like Kneser-Ney, if the baseline smoothing method is consistent, then the amount of smoothing in the limit will go to zero and our resulting model will also be consistent. The smoothed relative frequency estimate P and higher order β values on the right-hand side of Eq. 8 are given values (from the input smoothed model and previous stages in the algorithm, respectively), implying an algorithm that estimates highest orders of the model first. In addition, steady state history probabilities P(h) must be calculated. We turn to the estimation algorithm next. 4 Model constraint algorithm Our algorithm takes a smoothed backoff n-gram language model in an automaton format (see Figure 1) and returns a smoothed backoff n-gram language model with the same topology. For all ngrams in the model that are suffixes of other ngrams in the model – i.e., that are backed-off to – we calculate the weight provided by equation 8 and assign it (after normalization) to the appropriate n-gram arc in the automaton. There are several important considerations for this algorithm, which we address in this section. First, we must provide a probability for every state in the model. Second, we must memoize summed values that are used repeatedly. Finally, we must iterate the calculation of certain values that depend on the n-gram weights being re-estimated. 4.1 Steady state probability calculation The steady state probability P(h) is taken to be the probability of observing h after a long word sequence, i.e., the state’s relative frequency in a long sequence of randomly-generated sentences from the model: P(h) = lim m→∞ X w1...wm ˆP(w1 . . . wmh) (9) where ˆP is the corpus probability derived as follows: The smoothed n-gram probability model P(w | h) is naturally extended to a sentence s = w0 . . . wl, where w0 = <s> and wl = </s> are the sentence initial and final words, by P(s) = Ql i=1 P(wi | hn i ). The corpus probability s1 . . . sr is taken as: ˆP(s1 . . . sr) = (1 −λ)λr−1 r Y i=1 P(si) (10) where λ parameterizes the corpus length distribution.2 Assuming the n-gram language model automaton G has a single final state </s> into 2ˆP models words in a corpus rather than a single sentence since Equation 9 tends to zero as m →∞otherwise. In Markov chain terms, the corpus distribution is made irreducible to allow a non-trivial stationary distribution. 
47 which all </s> arcs enter, adding a λ weighted ϵ arc from the </s> state to the initial state and having a final weight 1 −λ in order to leave the automaton at the </s> state will model this corpus distribution. According to Eq. 9, P(h) is then the stationary distribution of the finite irreducible Markov Chain defined by this altered automaton. There are many methods for computing such a stationary distribution; we use the well-known power method (Stewart, 1999). One difficulty remains to be resolved. The backoff arcs have a special interpretation in the automaton: they are traversed only if a word fails to match at the higher order. These failure arcs must be properly handled before applying standard stationary distribution calculations. A simple approach would be for each word w′ and state h such that hw′ /∈G, but h′w′ ∈G, add a w′ arc from state h to h′w′ with weight α(h, h′)β(h′w′) and then remove all failure arcs (see Figure 2a). This however results in an automaton with |V | arcs leaving every state, which is unwieldy with larger vocabularies and n-gram orders. Instead, for each word w and state h such that hw ∈G, add a w arc from state h to h′w with weight −α(h, h′)β(h′w) and then replace all failure labels with ϵ labels (see Figure 2b). In this case, the added negativelyweighted arcs compensate for the excess probability mass allowed by the epsilon arcs3. The number of added arcs is no more than found in the original model. 4.2 Accumulation of higher order values We are summing over all possible histories of length n in equation 8, and the steady state probability calculation outlined in the previous section includes the probability mass for histories h ̸∈G. The probability mass of states not in G ends up being allocated to the state representing their longest suffix that is explicitly in G. That is the state that would be active when these histories are encountered. Hence, once we have calculated the steady state probabilities for each state in the smoothed model, we only need to sum over states explicitly in the model. As stated earlier, the use of β(hwGw) in the numerator of equation 8 for hwG that are larger than h′ implies that the longer n-grams must be 3Since each negatively-weighted arc leaving a state exactly cancels an epsilon arc followed by a matching positively-weighted arc in each iteration of the power method, convergence is assured. (a) (b) h h' w/β(hw) w'/β(h'w') φ/α(h,h') hw h'w' w'/α(h,h') β(h'w') h h' w/β(hw) w/β(h'w) ε/α(h,h') hw h'w w/-α(h,h') β(h'w) Figure 2: Schemata showing failure arc handling: (a) φ removal: add w′ arc (red), delete φ arc; (b) φ replacement: add w arc (red), replace φ by ϵ (red) re-estimated first. Thus we process each history length in descending order, finishing with the unigram state. Since we assume that, for every ngram hw ∈G, every prefix and suffix is also in G, we know that if h′w ̸∈G then there is no history h such that h′ is a suffix of h and hw ∈G. This allows us to recursively accumulate the α(h, h′) P(h) in the denominator of Eq. 8. For every n-gram, we can accumulate values required to calculate the three terms in equation 8, and pass them along to calculate lower order ngram values. Note, however, that a naive implementation of an algorithm to assign these values is O(|V |n). This is due to the fact that the denominator factor must be accumulated for all higher order states that do not have the given n-gram. 
Hence, for every state h directly backing off to h′ (order |V |), and for every n-gram arc leaving state h′ (order |V |), some value must be accumulated. This can be particularly clearly seen at the unigram state, which has an arc for every unigram (the size of the vocabulary): for every bigram state (also order of the vocabulary), in the naive algorithm we must look for every possible arc. Since there are O(|V |n−2) lower order histories in the model in the worst case, we have overall complexity O(|V |n). However, we know that the number of stored n-grams is very sparse relative to the possible number of n-grams, so the typical case complexity is far lower. Importantly, the denominator is calculated by first assuming that all higher order states back off to the current n-gram, then subtract out the mass associated with those that are already observed at the higher order. In such a way, we need only perform work for higher order n-grams hw that are explicitly in the model. This optimization achieves orders-of-magnitude speedups, so that models take seconds to process. Because smoothing is not necessarily con48 strained across n-gram orders, it is possible that higher-order n-grams could be smoothed less than lower order n-grams, so that the numerator of equation 8 can be less than zero, which is not valid. A value less than zero means that the higher order n-grams will already produce the n-gram more frequently than its smoothed expected frequency. We set a minimum value ϵ for the numerator, and any n-gram numerator value less than ϵ is replaced with ϵ (for the current study, ϵ = 0.001). We find this to be relatively infrequent, about 1% of n-grams for most models. 4.3 Iteration Recall that P and β terms on the right-hand side of equation 8 are given and do not change. But there are two other terms in the equation that change as we update the n-gram parameters. The α(h, h′) backoff weights in the denominator ensure normalization at the higher order states, and change as the n-gram parameters at the current state are modified. Further, the steady state probabilities will change as the model changes. Hence, at each state, we must iterate the calculation of the denominator term: first adjust n-gram weights and normalize; then recalculate backoff weights at higher order states and iterate. Since this only involves the denominator term, each n-gram weight can be updated by multiplying by the ratio of the old term and the new term. After the entire model has been re-estimated, the steady state probability calculation presented in Section 4.1 is run again and model estimation happens again. As we shall see in the experimental results, this typically converges after just a few iterations. At this time, we have no convergence proofs for either of these iterative components to the algorithm, but expect that something can be said about this, which will be a priority in future work. 5 Experimental results All results presented here are for English Broadcast News. We received scripts for replicating the Chelba et al. (2010) results from the authors, and we report statistics on our replication of their paper’s results in Table 2. The scripts are distributed in such a way that the user supplies the data from LDC98T31 (1996 CSR HUB4 Language Model corpus) and the script breaks the collection into training and testing sets, normalizes the text, and Smoothing Perplexity n-grams (×1000) method full pruned model diff Abs.Disc. 
120.4 197.1 382.3 -1.1 Witten-Bell 118.7 196.1 379.3 -1.1 Ristad 126.2 203.4 394.6 -1.1 Katz 119.7 197.9 385.1 -1.1 Kneser-Ney† 114.4 234.1 375.4 -12.7 Table 2: Replication of Chelba et al. (2010) using provided script. Using the script, the size of the unpruned model is 31,091,219 ngrams, 4,041 fewer than Chelba et al. (2010). † Kneser-Ney model pruned using -prune-history-lm switch in SRILM. trains and prunes the language models using the SRILM toolkit (Stolcke et al., 2011). Presumably due to minor differences in text normalization, resulting in very slightly fewer n-grams in all conditions, we achieve negligibly lower perplexities (one or two tenths of a point) in all conditions, as can be seen when comparing with Table 1. All of the same trends result, thus that paper’s result is successfully replicated here. Note that we ran our Kneser-Ney pruning (noted with a † in the table), using the new -prune-history-lm switch in SRILM – created in response to the Chelba et al. (2010) paper – which allows the use of another model to calculate the state marginals for pruning. This fixes part of the problem – perplexity does not degrade as much as the Kneser-Ney pruned model in Table 1 – but, as argued earlier in this paper, this is not the sole reason for the degradation and the perplexity remains extremely inflated. We follow Chelba et al. (2010) in training and test set definition, vocabulary size, and parameters for reporting perplexity. Note that unigrams in the models are never pruned, hence all models assign probabilities over an identical vocabulary and perplexity is comparable across models. For all results reported here, we use the SRILM toolkit for baseline model training and pruning, then convert from the resulting ARPA format model to an OpenFst format (Allauzen et al., 2007), as used in the OpenGrm n-gram library (Roark et al., 2012). We then apply the marginal distribution constraints, and convert the result back to ARPA format for perplexity evaluation with the SRILM toolkit. All models are subjected to full normalization sanity checks, as with typical model functions in the OpenGrm library. Recall that our algorithm assumes that, for every n-gram in the model, all prefix and suffix ngrams are also in the model. For pruned models, the SRILM toolkit does not impose such a requirement, hence explicit arcs are added to the 49 Perplexity n-grams Smoothing Pruned Pruned (×1000) Method Model +MDC ∆ in WFST Abs.Disc. 197.1 187.4 9.7 389.2 Witten-Bell 196.1 185.7 10.4 385.0 Ristad 203.4 190.3 13.1 395.9 Katz 197.9 187.5 10.4 390.8 AD,WB,Katz Mixture 196.6 186.3 10.3 388.7 Table 3: Perplexity reductions achieved with marginal distribution constraints (MDC) on the heavily pruned models from Chelba et al. (2010), and a mixture model. WFST ngram counts are slightly higher than ARPA format in Table 2 due to adding prefix and suffix n-grams. model during conversion, with probability equal to what they would receive in the the original model. The resulting model is equivalent, but with a small number of additional arcs in the explicit representation (around 1% for the most heavily pruned models). Table 3 presents perplexity results for models that result from applying our marginal distribution constraints to the four heavily pruned models from Table 2. In all four cases, we get perplexity reductions of around 10 points. 
We present the number of n-grams represented explicitly in the WFST, which is a slight increase from those presented in Table 2 due to the reintroduction of prefix and suffix n-grams. In addition to the four models reported in Chelba et al. (2010), we produced a mixture model by interpolating (with equal weight) smoothed ngram probabilities from the full (unpruned) absolute discounting, Witten-Bell and Katz models, which share the same set of n-grams. After renormalizing and pruning to approximately the same size as the other models, we get commensurate gains using this model as with the other models. Figure 3 demonstrates the importance of iterating the steady state history calculation. All of the methods achieve perplexity reductions with subsequent iterations. Katz and absolute discounting achieve very little reduction in the first iteration, but catch back up in the second iteration. The other iterative part of the algorithm, discussed in Section 4.3, is the denominator of equation 8, which changes due to adjustments in the backoff weights required by the revised n-gram probabilities. If we do not iteratively update the backoff weights when reestimating the weights, the ‘Pruned+MDC’ perplexities in Table 3 increase by between 0.2–0.4 points. Hence, iterating the steady state probability calculation is quite important, as illustrated by Figure 3; iterating the 0 1 2 3 4 5 6 180 185 190 195 200 205 Iterations of estimation (recalculating steady state probs) Perplexity Witten−Bell Ristad Katz Absolute Discounting WB,AD,Katz mixture Figure 3: Models resulting from different numbers of parameter re-estimation iterations. Iteration 0 is the baseline pruned model. denominator calculation much less so, at least for these models. We noted in Section 3 that a key difference between our approach and Kneser and Ney (1995) is that their approach treated the denominator as a constant. If we do this, the ‘Pruned+MDC’ perplexities increase by between 4.5–5.6 points, i.e., about half of the perplexity reduction is lost for each method. Thus, while iteration of denominator calculation may not be critical, it should not be treated as a constant. We now look at the impacts on system performance we can achieve with these new models4, and whether the perplexity differences that we observe translate to real error rate reductions. For automatic speech recognition experiments, we used as test set the 1997 Hub4 evaluation set consisting of 32,689 words. The acoustic model is a tied-state triphone GMM-based HMM whose input features are 9-frame stacked 13-dimensional PLP-cepstral coefficients projected down to 39 dimensions using LDA. The model was trained on the 1996 and 1997 Hub4 acoustic model training sets (about 150 hours of data) using semi-tied covariance modeling and CMLLR-based speaker adaptive training and 4 iterations of boosted MMI. We used a multi-pass decoding strategy: two quick passes for adaptation supervision, CMLLR and MLLR estimation; then a slower full decoding pass running about 3 times slower than real time. Table 4 presents recognition results for the heavily pruned models that we have been considering, both for first pass decoding and rescoring of the resulting lattices using failure transitions rather than epsilon backoff approximations. 4For space purposes, we exclude the Ristad method from this point forward since it is not competitive with the others. 50 Word error rate (WER) First pass Rescoring Smoothing Pruned Pruned Pruned Pruned Method Model +MDC Model +MDC Abs.Disc. 
20.5 19.7 20.2 19.6 Witten-Bell 20.5 19.9 20.1 19.6 Katz 20.5 19.7 20.2 19.7 Mixture 20.5 19.6 20.2 19.6 Kneser-Neya 22.1 22.2 Kneser-Neyb 20.5 20.6 Table 4: WER reductions achieved with marginal distribution constraints (MDC) on the heavily pruned models from Chelba et al. (2010), and a mixture model. KneserNey results are shown for: a) original pruning; and b) with -prune-history-lm switch. The perplexity reductions that were achieved for these models do translate to real word error rate reductions at both stages of between 0.5 and 0.9 percent absolute. All of these gains are statistically significant at p < 0.0001 using the stratified shuffling test (Yeh, 2000). For pruned Kneser-Ney models, fixing the state marginals with the -prune-history-lm switch reduces the WER versus the original pruned model, but no reductions were achieved vs. baseline methods. Table 5 presents perplexity and WER results for less heavily pruned models, where the pruning thresholds were set to yield approximately 1.5 million n-grams (4 times more than the previous models); and another set at around 5 million n-grams, as well as the full, unpruned models. While the robust gains we’ve observed up to now persist with the 1.5M n-gram models (WER reductions significant, Witten-Bell at p < 0.02, others at p < 0.0001), the larger models yield diminishing gains, with no real WER improvements. Performance of Witten-Bell models with the marginal distribution constraints degrade badly for the larger models, indicating that this method of regularization, unmodified by aggressive pruning, does not provide a well suited distribution for this sort of optimization. We speculate that this is due to underregularization, having noted some floating point precision issues when allowing the backoff recalculation to run indefinitely. 6 Summary and Future Directions The presented method reestimates lower order n-gram model parameters for a given smoothed backoff model, achieving perplexity and WER reductions for many smoothed models. There remain a number of open questions to investigate in the future. Recall that the numerator in Eq. 8 can be less than zero, meaning that no parameterization would lead to a model with the target frequency of the lower order n-gram, presumably due to over- or under-regularization. We anticipate a pre-constraint on the baseline smoothing method, that would recognize this problem and adjust the smoothing to ensure that a solution does exist. Additionally, it is clear that different regularization methods yield different behaviors, notably that large, relatively lightly pruned WittenBell models yield poor results. We will look to identify the issues with such models and provide general guidelines for prepping models prior to processing. Finally, we would like to perform extensive controlled experimentation to examine the relative contribution of the various aspects of our approach. Acknowledgments Thanks to Ciprian Chelba and colleagues for the scripts to replicate their results. This work was supported in part by a Google Faculty Research Award and NSF grant #IIS-0964102. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF. M Less heavily pruned model Moderately pruned model Full model Smoothing D ngrams WER ngrams WER ngrams WER Method C (×106) PPL FP RS (×106) PPL FP RS (×106) PPL FP RS Abs. N 1.53 146.6 18.1 17.9 5.19 129.1 17.0 16.6 31.1 120.4 16.2 16.1 Disc. 
Y 141.2 17.2 17.2 126.3 16.6 16.6 31.1 117.0 16.0 16.0 WittenN 1.54 145.8 18.1 17.6 5.08 129.4 17.3 16.8 31.1 118.7 16.3 16.1 Bell Y 139.7 17.9 17.4 126.4 18.4 17.3 31.1 118.4 18.1 17.6 Katz N 1.57 146.6 17.8 17.7 5.10 128.9 16.8 16.6 31.1 119.7 16.2 16.1 Y 141.1 17.3 17.3 125.7 16.6 16.6 31.1 114.7 16.2 16.1 Mixture N 1.55 145.5 18.1 17.7 5.11 128.2 16.9 16.6 31.1 118.5 16.3 16.1 Y 139.2 17.3 17.2 123.6 16.6 16.4 31.1 114.6 17.3 16.4 Kneser-Ney backoff model, unpruned: 31.1 114.4 15.8 15.9 Table 5: Perplexity (PPL) and both first pass (FP) and rescoring (RS) WER reductions for less heavily pruned models using marginal distribution constraints (MDC). 51 References Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of the Twelfth International Conference on Implementation and Application of Automata (CIAA 2007), Lecture Notes in Computer Science, volume 4793, pages 11–23. Ciprian Chelba, Thorsten Brants, Will Neveitt, and Peng Xu. 2010. Study on interaction between entropy pruning and Kneser-Ney smoothing. In Proceedings of Interspeech, page 24222425. Stanley Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report, TR-10-98, Harvard University. Joshua Goodman. 2001. A bit of progress in language modeling. Computer Speech and Language, 15(4):403–434. Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recogniser. IEEE Transactions on Acoustic, Speech, and Signal Processing, 35(3):400–401. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 181–184. Hermann Ney, Ute Essen, and Reinhard Kneser. 1994. On structuring probabilistic dependences in stochastic language modeling. Computer Speech and Language, 8:1–38. Eric S. Ristad. 1995. A natural law of succession. Technical Report, CS-TR-495-95, Princeton University. Brian Roark, Richard Sproat, Cyril Allauzen, Michael Riley, Jeffrey Sorensen, and Terry Tai. 2012. The OpenGrm open-source finite-state grammar software libraries. In Proceedings of the ACL 2012 System Demonstrations, pages 61–66. Kristie Seymore and Ronald Rosenfeld. 1996. Scalable backoff language models. In Proceedings of the International Conference on Spoken Language Processing (ICSLP). Vesa Siivola, Teemu Hirsimaki, and Sami Virpioja. 2007. On growing and pruning kneserney smoothed n-gram models. IEEE Transactions on Audio, Speech, and Language Processing, 15(5):1617– 1624. William J Stewart. 1999. Numerical methods for computing stationary distributions of finite irreducible markov chains. Computational Probability, pages 81–111. Andreas Stolcke, Jing Zheng, Wen Wang, and Victor Abrash. 2011. Srilm at sixteen: Update and outlook. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). Andreas Stolcke. 1998. Entropy-based pruning of backoff language models. In Proc. DARPA Broadcast News Transcription and Understanding Workshop, pages 270–274. David Talbot and Thorsten Brants. 2008. Randomized language models via perfect hash functions. In Proceedings of ACL-08: HLT, pages 505–513. David Talbot and Miles Osborne. 2007. Smoothed Bloom filter language models: Tera-scale LMs on the cheap. 
In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 468–476. Ian H. Witten and Timothy C. Bell. 1991. The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE Transactions on Information Theory, 37(4):1085–1094. A. Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th International COLING, pages 947–953.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 507–516, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Feature-Based Selection of Dependency Paths in Ad Hoc Information Retrieval K. Tamsin Maxwell School of Informatics University of Edinburgh Edinburgh EH8 9AB, UK [email protected] Jon Oberlander School of Informatics University of Edinburgh Edinburgh EH8 9AB, UK [email protected] W. Bruce Croft Dept. of Computer Science University of Massachusetts Amherst, MA 01003, USA [email protected] Abstract Techniques that compare short text segments using dependency paths (or simply, paths) appear in a wide range of automated language processing applications including question answering (QA). However, few models in ad hoc information retrieval (IR) use paths for document ranking due to the prohibitive cost of parsing a retrieval collection. In this paper, we introduce a flexible notion of paths that describe chains of words on a dependency path. These chains, or catenae, are readily applied in standard IR models. Informative catenae are selected using supervised machine learning with linguistically informed features and compared to both non-linguistic terms and catenae selected heuristically with filters derived from work on paths. Automatically selected catenae of 1-2 words deliver significant performance gains on three TREC collections. 1 Introduction In the past decade, an increasing number of techniques have used complex and effective syntactic and semantic features to determine the similarity, entailment or alignment between short texts. These approaches are motivated by the idea that sentence meaning can be flexibly captured by the syntactic and semantic relations between words, and encoded in dependency parse tree fragments. Dependency paths (or simply, paths) are compared using techniques such as tree edit distance (Punyakanok et al., 2004; Heilman and Smith, 2010), relation probability (Gao et al., 2004) and parse tree alignment (Wang et al., 2007; Park et al., 2011). Much work on sentence similarity using dependency paths focuses on question answering (QA) where textual inference requires attention to linguistic detail. Dependency-based techniques can also be highly effective for ad hoc information retrieval (IR) (Park et al., 2011). However, few path-based methods have been explored for ad hoc IR, largely because parsing large document collections is computationally prohibitive. In this paper, we explore a flexible application of dependency paths that overcomes this difficulty. We reduce paths to chains of words called catenae (Osborne and Groß, 2012) that capture salient semantic content in an underspecified manner. Catenae can be used as lexical units in a reformulated query to explicitly indicate important word relationships while retaining efficient and flexible proximity matching. Crucially, this does not require parsing documents. Moreover, catenae are compatible with a variety of existing IR models. We hypothesize that catenae identify most units of salient knowledge in text. This is because they are a condition for ellipsis, in which salient knowledge can be successfully omitted from text (Osborne and Groß, 2012). To our knowledge, this paper is the first time that catenae are proposed as a means for term selection in IR, and where ellipsis is considered as a means for identification of semantic units. 
We also extend previous work with development of a linguistically informed, supervised machine learning technique for selection of informative catenae. Previous heuristic filters for dependency paths (Lin and Pantel, 2001; Shen et al., 2005; Cui et al., 2005) can exclude informative relations. Alternatively, treating all paths as equally informative (Punyakanok et al., 2004; Park et al., 2011; Moschitti, 2008) can generate noisy word relations and is computationally intensive. The challenge of path selection is that no explicit information in text indicates which paths are relevant. Consider the catenae captured by heuristic filters for the TREC1 query, ‘What role does blood-alcohol level play in automobile accident fatalities’ (#358, Table 1). It may appear obvious that the component words of ‘role play’ 1Text REtrieval Conference, see http://trec.nist.gov/ 507 blood alcohol level play auto accident accident fatal role play play fatal blood alcohol play play accident fatal auto accident fatal level play fatal role play fatal role level play blood alcohol level play auto accident accident fatal role blood alcohol level play auto blood alcohol level play auto accident accident fatal role play play fatal Catenae Sequential dependence Governor3dependent Query: What role does blood-alcohol level play in automobile* accident fatalities*? (*abbreviated to `auto', `fatal') auto accident accident fatal play fatal play accident fatal auto accident fatal Predicate3argument auto accident accident fatal auto accident fatal level play fatal role play fatal Nominal end slots Table 1: Catenae derived from dependency paths, as selected by heuristic methods. Selections are compared to sequential bigrams that use no linguistic knowledge. and ‘level play’ do not have an important semantic relationship relative to the query, yet these catenae are described by parent-child relations that are commonly used to filter paths in text processing applications. Alternative filters that avoid such trivial word combinations also omit descriptions of key entities such as ‘blood alcohol’, and identify longer catenae that may be overly restrictive. These shortcomings suggest that an optimized selection process may improve performance of techniques that use dependency paths in ad hoc IR. We identify three previously proposed selection methods, and compare them on the task of catenae selection for ad hoc IR. Selections are tested using three TREC collections: Robust04, WT10G, and GOV2. This provides a diverse platform for experiments. We also develop a linguistically informed machine learning technique for catenae selection that captures both key aspects of heuristic filters, and novel characteristics of catenae and paths. The basic idea is that selection, or weighting, of catenae can be improved by features that are specific to paths, rather than generic for all terms. Results show that our selection method is more effective in identifying key catenae compared to previously proposed filters. Integration of the identified catenae in queries also improves IR effectiveness compared to a highly effective baseline that uses sequential bigrams with no linguistic knowledge. This model represents the obvious alternative to catenae for term selection in IR. The rest of this paper is organised as follows. §2 reviews related work, §3 describes catenae and their linguistic motivation and §4 describes our selection method. §5 evaluates classification experiments using the supervised filter. 
§6 presents the results of experiments in ad hoc IR. Finally, §7 concludes the paper. 2 Related work Techniques that compare short text segments using dependency paths are applied to a wide range of automated language processing tasks, including paraphrasing, summarization, entailment detection, QA, machine translation and the evaluation of word, phrase and sentence similarity. A generic approach uses a matching function to compare a dependency path between any two stemmed terms x and y in a sentence A with any dependency path between x and y in sentence B. The match score for A and B is computed over all dependency paths in A. In QA this approach improves question representation, answer selection and answer ranking compared to methods that use bag-of-words and ngram features (Surdeanu et al., 2011). For example, Lin and Pantel (2001) present a method to derive paraphrasing rules for QA using analysis of paths that connect two nouns; Echihabi and Marcu (2003) align all paths in questions with trees for heuristically pruned answers; Cui et al. (2005) score answers using a variation of the IBM translation model 1; Wang et al. (2007) use quasi-synchronous translation to map all parent-child paths in a question to any path in an answer; and Moschitti (2008) explores syntactic and semantic kernels for QA classification. In ad hoc IR, most models of term dependence use word co-occurrence and proximity (Song and Croft, 1999; Metzler and Croft, 2005; Srikanth and Srihari, 2002; van Rijsbergen, 1993). Syntactic language models for IR are a significant departure from this trend (Gao et al., 2004; Lee et al., 2006; Cai et al., 2007; Maisonnasse et al., 2007) that use dependency paths to address long-distance dependencies and normalize spurious differences in surface text. Paths are constrained in both 508 prd loc pmod loc pmod Is polio under control in China ? X1 X2 X3 X4 X5 X6 polio polio control control control China China polio control China polio under control control in China polio under control in China loc pmod loc pmod loc pmod loc pmod Catenae 'stoplisted. Dependency paths Figure 1: Catenae are an economical and intuitive representation of dependency paths. queries and documents to parent-child relations. In contrast, (Park et al., 2011) present a quasisynchronous translation model for IR that does not limit paths. This is based on the observation that semantically related words have a variety of direct and indirect relations. All of these models require parsing of an entire document collection. Techniques using dependency paths in both QA and ad hoc IR show promising results, but there is no clear understanding of which path constraints result in the greatest IR effectiveness. We directly compare selections of catenae as a simplified representation of paths. In addition, a vast number of methods have been presented for term weighting and selection in ad hoc IR. Our supervised selection extends the successful method presented by Bendersky and Croft (2008) for selection and weighting of query noun phrases (NPs). It also extends work for determining the variability of governor-dependent pairs (Song et al., 2008). In contrast to this work, we apply linguistic features that are specific to catenae and dependency paths, and select among units containing more than two content-bearing words. 3 Catenae as semantic units Catenae (Latin for ‘chain’, singular catena) are dependency-based syntactic units. This section outlines their unique semantic properties. 
A catena is defined on a dependency graph that has lexical nodes (or words) linked by binary asymmetrical relations called dependencies. Dependencies hold between a governor and a dependent and may be syntactic or semantic in nature (Nivre, 2005). A dependency graph is usually acyclic such that each node has only one governor, and one root node of the tree does not depend on any other node. A catena is a word, or sequence of words that are continuous with respect to a walk on a dependency Is polio under control in China, and is polio under control in India? Antecedent First conjunct: Antecedent clause Second conjunct: Elliptical/target clause Elided text Remnant Figure 2: Ellipsis in a coordinated construct. graph. For example, Fig. 1 shows a dependency parse that generates 21 catenae in total: (using i for Xi) 1, 2, 3, 4, 5, 6, 12, 23, 34, 45, 56, 123, 234, 345, 456, 1234, 2345, 3456, 12345, 23456, 123456. We process catenae to remove stop words on the INQUERY stoplist (Allan et al., 2000) and lexical units containing 18 TREC description stop words such as ‘describe’. This results in a reduced set of catenae as shown in Fig. 1. A dependency path is ordered and includes both word tokens and the relations between them. In contrast, a catena is a set of word types that may be ordered or partially ordered. A catena is an economical, intuitive lexical unit that corresponds to a dependency path and is argued to play an important role in syntax (Osborne et al., 2012). In this paper, we explore catenae instead of paths for ad hoc IR due to their suitability for efficient IR models and flexible representation of language semantics. Specifically, we note that catenae identify words that can be omitted in elliptical constructions (Osborne et al., 2012). They thus represent salient semantic information in text. To clarify this insight, we briefly review catenae in ellipsis. 3.1 Semantic units in ellipsis Fig. 2 shows terminology for the phenomenon of ellipsis. The omitted words are called elided text, and words that could be omitted, but are not, we call elliptical candidates. Ellipsis relies on the logical structure of a coordinated construction in which two or more elements, such as sentences, are joined by a conjunctive word or phrase such as ‘and’ or ‘more than’. A coordinated structure is required because the omitted words are ‘filled in’ by assuming a parallel relation p between the first and second conjunct. In ellipsis, p is omitted and its arguments are retained in text. In order for ellipsis to be successful and grammatically correct, p must be salient shared knowledge at the time of communication (Prince, 1986; Steedman, 1990). If p is salient then the omitted text can be inferred. If p is not salient then the omission of words merely results in ungrammatical, or incoherent, sentences. This framework is practically illustrated in Fig. 509 Is polio under control in China, and 3is polio under control. in India ? Is polio under control in China, and is cancer under observation 3in China7 ? * Is polio under control in China, and 3is7 cancer 2under. observation 3in China7 ? * Is polio under control in China, and 3is polio. under 2control in7 India ? Ellipsis candidates marked in italics: they are catenae a7 in India ? b7 is cancer under observation ? c7 * cancer observation ? d7 * under India ? Is polio under control in China, and... Ellided sentences Figure 3: For ellipsis to be successful, elided words must be catenae. Ellipsis candidates are catenae2. Is polio under control in China ? 
X1 X2 X3 X4 X5 X6 Figure 4: A parse in which ‘polio China’ is a catena. 3 for the query, ‘Is polio under control in China?’. Sentences marked by * are incoherent, and it is evident that the omitted words do not form a salient semantic unit. They also do not form catenae. In contrast, the omitted words in successful ellipsis do form catenae, and they represent informative word combinations with respect to the query. This observation leads us to an ellipsis hypothesis: Ellipsis hypothesis: For queries formulated into coordinated structures, the subset of catenae that are elliptical candidates identify the salient semantic units in the query. 3.2 Limitations of paths and catenae The prediction of salient semantic units by catenae is quite robust. However, there are two problems that can limit the effectiveness of any technique that uses catenae or dependency paths in IR. 1) Syntactic ambiguity: We make the simplifying assumption that the most probable parse of a query is accurate and sufficient for the extraction of relevant catenae. However, this is not always true. For example, the sentence ‘Is polio under control in China, and under observation ?’ constitutes successful ellipsis. The elided words ‘polio in china’ are relevant to a base query, ‘Is polio under control in China?’. Unfortunately, in Fig. 1 the elided text does not qualify as a catena. A parse with alternative prepositional phrase attachment is shown in Fig. 4. Here, the successfully elided text does qualify as a catena. This highlights the fact that a single dependency parse may only partially represent the ambiguous semantics of a query. More accurate parsing does not address this problem. 2) Rising: Automatic extraction of catenae is limited by the phenomenon of rising. Let the used a toxic chemical as a weapon X4 X3 X2 X1 X5 X6 X7 Standard structure A toxic chemical used as a weapon X3 X2 X1 X4g X5 X6 X7 Rising structure Figure 5: A parse with and without rising. The dashed dependency edge marks where a head is not also the governor and the g-script marks the governor of the risen catena. governor of a catena be the word that licenses it (in Fig. 5 ‘used’ licenses ‘a toxic chemical’ e.g. ‘used what?’). Let the head of a catena be its parent in a dependency tree. Rising occurs when the head is not the same as the governor. This is frequently seen with wh-fronting questions that start who, what etc., as well as with many other syntactic discontinuities (Osborne and Groß, 2012). More specifically, rising occurs when a catena is separated from its governor by words that its governor does not dominate, or the catena dominates the governor, as in Fig. 5. Note that in the risen structure, the words for the catena ‘chemical as a weapon’ are discontinuous on the surface, interrupted by the word ‘used’. 4 Selection method for catenae Catenae describe relatively few of the possible word combinations in a sentence, but still include many combinations that do not result in successful ellipsis and are not informative for IR. This section describes our supervised method for selection of informative catenae. Candidate catenae are identified using two constraints that enable more efficient extraction: stopwords are removed, and stopped catenae must contain fewer than four words (single words are permitted). We use a pseudo-projective joint dependency parse and semantic role labelling system (Johansson and 510 Nugues, 2008) to generate the dependency parse. This enables us to explore semantic classification features and is highly accurate. 
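The extraction step just described — enumerating word sets that are continuous with respect to a walk on the dependency tree, stoplisting them, and keeping only short stopped catenae — is compact enough to sketch in code. This is a minimal sketch under stated assumptions: the stoplist and head array below are invented stand-ins for the INQUERY stoplist and the Johansson and Nugues (2008) parser output used in the paper, and the brute-force subset enumeration is only practical because queries are short.

```python
from itertools import combinations

# Toy stoplist; the actual system uses the INQUERY stoplist.
STOP = {"is", "under", "in"}


def catenae(words, heads, max_len=3):
    """Return stopped catenae (word tuples) with 1..max_len content words.

    A catena is any word set that induces a connected subgraph of the
    dependency tree, i.e. is continuous with respect to a walk on it.
    """
    n = len(words)
    adj = [set() for _ in range(n)]          # undirected tree adjacency
    for child, head in enumerate(heads):
        if head > 0:                         # heads are 1-based, 0 = root
            adj[child].add(head - 1)
            adj[head - 1].add(child)

    def connected(nodes):
        nodes, seen, todo = set(nodes), set(), [min(nodes)]
        while todo:
            v = todo.pop()
            if v not in seen:
                seen.add(v)
                todo.extend((adj[v] & nodes) - seen)
        return seen == nodes

    found = set()
    for size in range(1, n + 1):             # brute force is fine for short queries
        for subset in combinations(range(n), size):
            if not connected(subset):
                continue
            stopped = tuple(w for w in (words[i] for i in subset)
                            if w.lower() not in STOP)
            if 1 <= len(stopped) <= max_len:
                found.add(stopped)
    return sorted(found)


# "Is polio under control in China" with hypothetical chain-shaped heads,
# which reproduces the 21 unstopped catenae listed for Fig. 1.
words = ["Is", "polio", "under", "control", "in", "China"]
heads = [0, 1, 2, 3, 4, 5]                   # 1-based heads, 0 = root
print(catenae(words, heads))
# [('China',), ('control',), ('control', 'China'), ('polio',),
#  ('polio', 'control'), ('polio', 'control', 'China')]
```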
However, any dependency parser may be applied instead. For comparison, catenae extracted from 500 queries using the Stanford dependency parser (de Marneffe et al., 2006) overlap with 77% of catenae extracted from the same queries using the applied parser. 4.1 Feature Classes Four feature classes are presented in Table 2: Ellipsis candidates: The ellipsis hypothesis suggests that informative catenae are elliptical candidates. However, queries are not in the coordinated structures required for ellipsis. To enable extraction of characteristic features we (a) construct a coordinated query by adding the query to itself; and (b) elide catenae from the second conjunct. For example, for the query, Is polio under control in China? we have: (a) Is polio under control in China, and is polio under control in China? (b) Is polio under control in China, and is polio in China? We refer to the words in (b) as the query remainder and use this to identify features detailed in Table 2. Dependency path features: Part-of-speech tags and semantic roles have been used to filter dependency paths. We identify several features that use these characteristics from prior work (Table 2). In addition, variability in the separation distance in documents observed for words that have governor-dependent relations in queries has been proposed for identification of promising paths (Song et al., 2008). We also observe that due to the phenomenon of rising, words that form catenae can be discontinuous in text, and the ability of catenae to match similar word combinations is limited by variability of how they appear in documents. Thus, we propose features for separation distance, but use efficient collection statistics rather than summing statistics for every document in a collection. Co-occurrence features: A governor w1 tends to subcategorize for its dependents wn. This means that w1 often determines the choice of wn. We conclude that co-occurrence is an important feature of dependency relations (Mel’˘cuk, 2003). In addition, term frequencies and inverse document frequencies calculated using word co-occurrence measures are commonly used in IR. We use features previously proposed for filtering terms in IR (Bendersky and Croft, 2008) with two methods to normalize co-occurrence counts for catenae of different lengths: a factor |c||c|, where |c| is the number of words in catena c (Hagen et al., 2011), and the average score for a feature type over all pairwise word combinations in c. IR performance predictors: Catenae take the same form as typical IR search terms. For this reason, we also use predictors of IR effectiveness previously applied to IR terms. In general, path and co-occurrence features are similar to those applied by Surdeanu et al. (2011) but we do not parse documents. Path features are also similar to Song et al. (2008), but more efficient and suited to units of variable length. Ellipsis features have not been used before. 5 Experimental setup 5.1 Classification Catenae selection is framed as a supervised classification problem trained on binary human judgments of informativeness: how well catenae represent a query and discriminate between relevant and non-relevant documents in a collection. Kappa for two annotators on catenae in 100 sample queries was 0.63, and test-retest reliability for individual judges was similar (0.62)3. Although this is low, human annotations produced consistently better classification accuracy than other labelling methods explored. 
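For the agreement figure quoted above, Cohen's kappa on the binary informativeness labels can be computed directly. The label arrays below are invented for illustration only; the real study compared two annotators over catenae from 100 sample queries.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary judgments (1 = informative) from two annotators
# over the same candidate catenae.
annotator_a = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
annotator_b = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]

print(round(cohen_kappa_score(annotator_a, annotator_b), 2))  # ~0.58 for these toy labels
```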
We use the Weka (Hall et al., 2009) AdaBoost.M1 meta-classifier (Freund and Schapire, 1996) with unpruned C4.5 decision trees as base learners to classify catenae as informative or not. Adaboost.M1 boosts decisions over T weak learners for T features using weighted majority voting. At each round, predictions of a new learner are focused on incorrectly classified examples from the previous round. Adaboost.M1 was selected in preference to other algorithms because it performed better in preliminary experiments, leverages many weak features to advantage, and usually does not overfit (Schapire et al., 1997). Predictions are made using 10-fold crossvalidation. There are roughly three times the number of uninformative catenae compared to informative catenae. In addition, the number of training examples is small (1295 to 5163 per collection). To improve classifier accuracy, the training data for each collection is supplemented and balanced by generating examples from queries for 3Catenae, judgments and annotation details available at ciir.cs.umass.edu/˜tmaxwell 511 isSeq Minimum perplexity of ngrams with length 2, 3, and 4 in a window of up to a 3 words around the site of catenae omission. This is the area where ungrammaticality may be introduced. For the remainder R=`ABCDE&ABE' we compute ppl1 for I&ABE, &AB, ABE, &A, AB, BEJ. R_ppl1 R_strict Compliance with strict handKcoded rules for grammaticality of a remainder. Rules include unlikely orderings of punctuation and partKofK speech MPOSQ tags Me.g. ,, Q, poor placement of determiners and punctuation, and orphaned words, such as adjectives without the nouns they modify. R_relax A relaxed version of handKcoded rules for R_strict. Some rules were observed to be overly aggressive in detection of ungrammatical remainders. Ellipsis candidate features (E) Co-occurrence features (C) IR performance prediction features (I) c_ppl1 Dependency path features (D) (continued) Dependency paths traverse nodes including stopwords and may be filtered based on POS tags. We use perplexity for the sequence of POS tags in catenae before removing stopwords. This is computed using a POS language model built on ukWaC parsed wikipedia data MBaroni et al., 2009Q. phClass Phrasal class for a catena, with options NP, VP and Other. A catena has a NP or VP class if it is, or is entirely contained by, an NP or VP MSong et al., 2008Q. NP_split Unsuccessful ellipsis often results if elided words only partly describe a base NP. Boolean feature for presence of a partial NP in the remainder. NPs Mand PPsQ are identified using the MontyLingua toolkit. PP_split As for NP_split, defined for prepositional phrases (PP). F_split As for NP_split, defined for finite clauses. semRole Boolean feature indicating whether a catena describes all, or part of, a predicateKargument structure MPASQ. Previous work approximated PAS by using paths between head nouns and verbs, and all paths excluding those within base chunks. c_len Length of a stopped catenae. Longer terms tend to reduce IR recall. Boolean indicating if catena words are sequential in stoplisted surface text. cf_ow Frequency of a catena in the retrieval collection, words appearing ordered in a window the length of the catena. cf_uw As for cf_ow, but words may appear unordered. cf_uw8 As for cf_uw, but the window has a length of 8 words. idf_ow Inverse document frequency MidfQ where document frequency MdfQ of a catena is calculated using cf_ow windows. 
Let N be the number of documents in the retrieval collection, then: idf(Ci) = log2 N df(Ci) and idf(Ci) = N if df(Ci) = 0. idf_uw As for idf_ow, but words may appear unordered. idf_uw8 As for idf_uw, but the window has a length of 8 words. gf Google ngrams frequency MBrants and Franz, 2006Q from a web crawl of approximately one trillion English word tokens. Counts from a large collection are expected to be more reliable than those from smaller test collections. WIG Normalized Weighted Information Gain MWIGQ is the change in information over top ranked documents between a random ranked list and an actual ranked list retrieved with a catena c MZhou and Croft, 2007Q. wig(c) = 1 k ! d∈Dk(c) log p(c|d) −log p(c|C) −log p(c|C) where Dk are the top k=50 documents retrieved with catena c from collection C, and p(c|·) are maximum likelihood estimates. A second feature uses the average WIG score for all pairwise word combinations in c. qf_in Frequency of appearance in queries from the Live Search 2006 search query log Mapproximately 15 million queriesQ. Query log frequencies are a measure of the likelihood that a catena will appear in any query. wf_in As for qf_in, but using frequency counts in Wikipedia titles instead of queries. sepMode Most frequent separation distance of words in catena c in the retrieval collection, with possible values S = 81, 2, 3, long=. 1 means that all words are adjacent, 2 means separation by 0-1 words, and long means containment in a window of size 4 ∗|c|. H_c Entropy for separation distance s of words in catena c in the retrieval collection.fs is the frequency of c in window size s, and fS is the frequency of c in a window of size 4 ∗|c| . All f are normalized for catena length using |c||c| MHagen et al., 2011Q. Hc = ! s∈S fs + 0.5 fS + 0.5 log2 fs + 0.5 fS + 0.5 sepRatio Where fs and fS are defined as for H_c: sepRatioc = fs>2 + 0.5 fS + 0.5 wRatio For words w in catena c; fS is defined as for H_c. wRatioc = 0.5 + 1 |c| ! w∈c fw fS + 0.5 nomEnd Boolean indicating whether the words at each end of the catena are nouns Mor the catena is a single nounQ. Dependency path features (D) Table 2: Classifier features. 512 Feature Classes Pr ROB04 WT10G GOV2 D-CI E-CI E-D E-D-CI R 86.2 72.8 79.3 67.1 77.0 68.0 Pr R 83.5 67.5 76.9 59.7 70.9 61.8 Pr R 86.2 71.7 77.2 65.6 72.8 63.9 Pr R 86.2 72.0 79.6 66.1 75.5 67.2 Table 3: Average classifier precision (Pr) and recall (R) over 10 folds. Pr is % positive predictions that are correct. R is % positive labeled instances predicted as positive. A combination of all classes marginally performs best. other collections used in this paper, plus TREC8QA. For example, training data for Robust04 includes data from WT10G, GOV2 and TREC8QA. Any examples that replicate catenae in the test collection are excluded. For Robust04, WT10G and GOV2 respectively, 30%, 82% and 69% of the training data is derived from other collections. 5.2 Classification results Average classification precision and recall is shown in Table 3. Co-occurrence and IR effectiveness prediction features (CI) was the most influential class, and accounted for 70% of all features in the model. Performance is marginally better using all features (E-D-CI) with a moderate improvement over human agreement on the annotation task. The E-D-CI filter is used in subsequent experiments. Catenae were predicted for all queries. Predictions were more accurate for Robust04 than the other two collections. 
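The classification setup described in §5.1 (boosted decision trees, 10-fold cross-validation, roughly three negatives per positive) can be approximated outside Weka. The sketch below substitutes scikit-learn's AdaBoost with CART-style trees for AdaBoost.M1 over unpruned C4.5 learners, and the feature matrix and labels are random placeholders standing in for the E/D/C/I features and human judgments.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: one row of features per candidate catena, binary label,
# ~3x more negatives than positives as in the paper's training data.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = (rng.random(400) < 0.25).astype(int)

# Approximation of Weka's AdaBoost.M1 over unpruned C4.5 trees
# (use base_estimator= instead of estimator= in scikit-learn < 1.2).
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(), n_estimators=50)
pred = cross_val_predict(clf, X, y, cv=10)   # 10-fold cross-validation

tp = ((pred == 1) & (y == 1)).sum()
precision = tp / max((pred == 1).sum(), 1)
recall = tp / max((y == 1).sum(), 1)
print(f"precision={precision:.2f} recall={recall:.2f}")
```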
One potential explanation is that Robust04 queries are longer on average (up to 32 content words per query, compared to up to 16 words) so they generate a more diverse set of catenae that are more easily distinguished with respect to informativeness. The proportion of training data specific to the retrieval collection may also be a factor. Longer queries produce a greater number of catenae, so less training data from other collections is required. 6 Evaluation framework 6.1 Baseline IR models Baselines are a unigram query likelihood (QL) model (bag of words) and a highly effective sequential dependence (SD) variant of the Markov random field (MRF) model (Metzler and Croft, 2005). SD uses a linear combination of three cliques of terms, where each clique is prioritized by a weight λc. The first clique contains individual words (query likelihood QL), λ1 = 0.85. The second clique contains query bigrams that match document bigrams in 2-word ordered windows (‘#1’), λ2 = 0.1. The third clique uses the same bigrams as clique 2 with an 8-word unordered window (‘#uw8’), λ3 = 0.05. For example, the query new york city in Indri4 query language is: #weight( λ1 #combine(new york city) λ2 #combine(#1(new york) #1(york city)) λ3 #combine(#uw8(new york) #uw8(york city))) SD is a competitive baseline in IR (Bendersky and Croft, 2008; Park et al., 2011; Xue et al., 2010). Our reformulated model uses the same query format as SD, but the second and third cliques contain filtered catenae instead of query bigrams. In addition, because catenae may be multi-word units, we adjust the unordered window size to 4 ∗|c|. So, if two catenae ‘york’ and ‘new york city’ are selected, the last clique has the form: λ3 #combine( york #uw12(new york city)) This query representation enables word relations to be explicitly indicated while maintaining efficient and flexible matching of catenae in documents. Moreover, it does not use dependency relations between words during retrieval, so there is no need to parse a collection. 6.2 Baseline catenae selection We explore four filters for catenae. Three are based on previous work and describe heuristic features of promising catenae. The fourth is our novel supervised classifier. NomEnd: Catenae starting and ending with nouns, or containing only one word that is a noun. Paths between nouns are used by Lin and Pantel (2001). SemRol: Catenae in which all component words are either predicates or argument heads. This is based on work that uses paths between head nouns and verbs (Shen et al., 2005), semantic roles (Moschitti, 2008), and all dependency paths except those that occur between words in the same base chunk (e.g. noun / verb phrase) (Cui et al., 2005). GovDep: Cantenae containing words with a governor-dependent relation. Many IR models use this form of path filtering e.g. (Gao et al., 2004; Wang et al., 2007). Relations are ‘collapsed’ by removing stopwords to reduce the distance between content nodes in a dependency graph. 4http://www.lemurproject.org/ 513 ROBUST04 WT10G GOV2 MAP R-Pr MAP R-Pr MAP R-Pr QL 25.25 28.69 19.55 22.77 25.77 31.26 SD 26.57† 30.02† 20.63 24.31† 28.00† 33.30† NomEnd 25.91† 29.35‡ 20.81† 24.27† 27.41† 32.94† GovDep 26.26† 29.63† 21.06 24.23† 27.87† 33.51† SemRol 25.70† 29.06 19.78 22.93 26.76 32.49† SFeat 27.04† 30.11† 20.84† 24.31† 28.43† 33.84† SF-12 27.03† 30.20† 21.62† 24.81† 28.57† 34.01† Table 4: IR results using filtered catenae consistently improve over non-linguistic methods. Significance(p < .05) shown compared to QL (†) and SD (‡). 
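To make the reformulated query format of §6.1 concrete, the hypothetical helper below mirrors the clique structure quoted above: clique weights 0.85/0.1/0.05, single words left bare, and unordered windows widened to 4·|c| for multi-word catenae. The use of #1 ordered windows for multi-word catenae in the second clique is an assumption, and the function itself is illustrative rather than part of the Indri toolkit.

```python
# Weights follow the SD settings quoted in the text.
L1, L2, L3 = 0.85, 0.10, 0.05

def clique_term(catena, ordered):
    words = catena.split()
    if len(words) == 1:                        # single words need no window
        return words[0]
    if ordered:                                # assumption: exact ordered window
        return "#1({})".format(" ".join(words))
    return "#uw{}({})".format(4 * len(words), " ".join(words))

def reformulated_query(query_words, catenae):
    c1 = "#combine({})".format(" ".join(query_words))
    c2 = "#combine({})".format(" ".join(clique_term(c, True) for c in catenae))
    c3 = "#combine({})".format(" ".join(clique_term(c, False) for c in catenae))
    return "#weight( {} {} {} {} {} {} )".format(L1, c1, L2, c2, L3, c3)

print(reformulated_query(["new", "york", "city"], ["york", "new york city"]))
# #weight( 0.85 #combine(new york city)
#          0.1 #combine(york #1(new york city))
#          0.05 #combine(york #uw12(new york city)) )
```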
ROBUST04 WT10G GOV2 MAP R-Pr MAP R-Pr MAP R-Pr SF-12 27.03 30.20 21.62 24.81 28.57 34.01 SF-123 26.83 30.34 21.34 24.64 28.77 34.24 SF-NE 26.51 29.86 21.42 24.55 27.96 33.26 SF-GD 26.22 29.48 20.33 23.72 28.30 33.83 Gold 27.92 31.15 22.56 25.69 29.65 35.08 Table 5: Results with supervised selection of catenae with specified length (SF-12, SF-123) are more effective than combinations of SFeat with heuristic NomEnd (SF-NE) or GovDep (SF-GD). 6.3 Experiments Experiments compare queries reformulated using catenae selected by baseline filters and our supervised selection method (SFeat) to SD and a bag-of-words model (QL). We also compare IR effectiveness of all catenae filtered using SFeat with approaches that combine SFeat with baseline filters. All models are implemented using the Indri retrieval engine version 4.12. 6.4 Results Results in Table 4 show significant improvement in mean average precision (MAP) of queries using catenae compared to QL. Consistent improvements over SD are also demonstrated for supervised selection applied to all catenae (SFeat) and catenae with only 1-2 words (SF-12) across all collections (Table 5). Overall, changes are small and fairly robust, with one half to two thirds of all queries showing less than 10% change in MAP. Unlike sFeat, other filters tend to decrease performance compared to SD. Governor-dependent relations for WT10G are an exception and we speculate that this is due to a negative influence of 3word catenae for this collection. Manual inspection suggests that WT10G queries are short and have relatively simple syntactic structure (e.g. few PP attachment ambiguities). This means that 3-word catenae (in all models except GovDep) tend to include uninformative words, such as ‘reasons’ in ‘fasting religious reasons’. In contrast, 3-word catenae in other collections tend to identify query subconcepts or phrases, such as ‘science plants water’. Classification results for catenae separated by length, such that the classifier for catenae with a specific length are trained on examples of catenae with the same length, confirm this intuition. The rejection rate for 3-word catenae is twice as high for WT10G as for other collections. It is also more difficult to distinguish informative 3-word catenae compared to catenae with 1-2 words. To assess the impact of classification accuracy on IR effectiveness, Table 5 shows results with oracle knowledge of annotator judgments. The SF-12 model combines catenae predicted for lengths 1 and 2. Its strong performance across all collections suggests that most of the benefit derived from catenae in IR is found in governor-dependent and single word units, where single words are important (GovDep uses only 2-word catenae). Another major observation (Table 5) is that mixing baseline heuristic filters with a supervised approach is not as successful as supervised selection alone. In particular, performance decreases for filtered governor-dependent pairs. This suggests that some important word relations in GovDep and NomEnd are captured by triangulation. Finally, we review selected catenae for queries that perform significantly better or worse than SD (> 75% change in MAP). The best IR effectiveness occurs when selected catenae clearly focus on the most important aspect of a query. Poor perfor514 mance is caused by a lack of focus in a catenae set, even though selected catenae are reasonable, or an emphasis on words that are not central to the query. 
The latter can occur when words that are not essential to query semantics appear in many catenae due to their position in the dependency graph. 7 Conclusion We presented a flexible implementation of dependency paths for long queries in ad hoc IR that does not require dependency parsing a collection. Our supervised selection technique for catenae addresses the need to balance a representation of language expressiveness with effective, efficient statistical methods. This is a core challenge in computational linguistics. It is not possible to directly compare performance of our approach with ad hoc techniques in IR that parse a retrieval collection. However, we note that a recent result using query translation based on dependency paths (Park et al., 2011) reports 14% improvement over query likelihood (QL). Our approach achieves 7% improvement over QL on the same collection. We conclude that catenae do not replace path-based techniques, but may offer some insight into their application, and have particular value when it is not practical to parse target documents to determine text similarity. Acknowledgments This work was supported in part by the Center for Intelligent Information Retrieval. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. References James Allan, Margaret E. Connell, W. Bruce Croft, Fang-Fang Feng, David Fisher, and Xiaoyan Li. 2000. INQUERY and TREC-9. In Proceedings of TREC-9, pages 551–562. Michael Bendersky and W. Bruce Croft. 2008. Discovering key concepts in verbose queries. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’08, pages 491–498, New York, NY, USA. ACM. Keke Cai, Jiajun Bu, Chun Chen, and Guang Qiu. 2007. A novel dependency language model for information retrieval. Journal of Zhejiang University SCIENCE A, 8(6):871–882. Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. Question answering passage retrieval using dependency relations. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’05, pages 400–407, New York, NY, USA. ACM. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC-2006. Abdessamad Echihabi and Daniel Marcu. 2003. A noisy-channel approach to question answering. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 16–23, Stroudsburg, PA, USA. Association for Computational Linguistics. Yoav Freund and Robert E. Schapire. 1996. Experiments with a new boosting algorithm. In ICML’96, pages 148–156. Jianfeng Gao, Jian-Yun Nie, Guangyuan Wu, and Guihong Cao. 2004. Dependence language model for information retrieval. In Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’04, pages 170–177, New York, NY, USA. ACM. Matthias Hagen, Martin Potthast, Benno Stein, and Christof Br¨autigam. 2011. Query segmentation revisited. In Proceedings of the 20th international conference on World wide web, WWW ’11, pages 97–106, New York, NY, USA. ACM. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: an update. 
SIGKDD Explorations Newsletter, 11:10–18, November. Michael Heilman and Noah A. Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 1011–1019, Stroudsburg, PA, USA. Association for Computational Linguistics. Richard Johansson and Pierre Nugues. 2008. Dependency-based syntactic–semantic analysis with PropBank and NomBank. In Proceedings of CoNNL 2008, pages 183–187. Changki Lee, Gary Geunbae Lee, and Myung-Gil Jang. 2006. Dependency structure language model for information retrieval. In In ETRI journal, volume 28, pages 337–346. Dekang Lin and Patrick Pantel. 2001. DIRT - discovery of inference rules from text. In Proceedings of ACM Conference on Knowledge Discovery and Data Mining (KDD-01), pages 323–328, San Francisco, CA. 515 Lo¨ıc Maisonnasse, Eric Gaussier, and Jean-Pierre Chevallet. 2007. Revisiting the dependence language model for information retrieval. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’07, pages 695–696, New York, NY, USA. ACM. Igor A. Mel’˘cuk. 2003. Levels of dependency in linguistic description: Concepts and problems. In V. Agel, L. Eichinger, H.-W. Eroms, P. Hellwig, H. J. Herringer, and H. Lobin, editors, Dependency and Valency. An International Handbook of Contemporary Research, volume 1, pages 188–229. Walter De Gruyter, Berlin–New York. Donald Metzler and W. Bruce Croft. 2005. A Markov random field model for term dependencies. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’05, pages 472–479, New York, NY, USA. ACM. Alessandro Moschitti. 2008. Kernel methods, syntax and semantics for relational text categorization. In Proceeding of the 17th ACM conference on Information and knowledge management, CIKM ’08, pages 253–262, New York, NY, USA. ACM. Joakim Nivre. 2005. Dependency grammar and dependency parsing. Technical report, V¨axj¨o University: School of Mathematics and Systems Engineering. Timothy Osborne and Thomas Groß. 2012. Constructions are catenae: Construction grammar meets dependency grammar. Cognitive Linguistics, 23(1):165–216. Timothy Osborne, Michael Putnam, and Groß. 2012. Catenae: Introducing a novel unit of syntactic analysis. Syntax, 15(4):354–396, December. Jae Hyun Park, W. Bruce Croft, and David A. Smith. 2011. A quasi-synchronous dependence model for information retrieval. In Proceedings of the 20th ACM international conference on Information and knowledge management, CIKM ’11, pages 17–26, New York, NY, USA. ACM. Ellen F. Prince. 1986. On the syntactic marking of presupposed open propositions. In Proceedings of the 22nd Annual Meeting of the Chicago Linguistic Society, pages 208–222. V. Punyakanok, D. Roth, and W. Yih. 2004. Mapping dependencies trees: An application to question answering. In Proceedings of AI and MATH Symposium 2004 (Special session: Intelligent Text Processing). Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. 1997. Boosting the margin: A new explanation for the effectiveness of voting methods. In Proceedings of ICML, pages 322–330. Dan Shen, Geert-Jan M. Kruijff, and Dietrich Klakow. 2005. Exploring syntactic relation patterns for question answering. 
In Proceedings of the Second international joint conference on Natural Language Processing, IJCNLP’05, pages 507–518, Berlin, Heidelberg. Springer-Verlag. Fei Song and W. Bruce Croft. 1999. A general language model for information retrieval. In Proceedings of the 8th ACM international conference on Information and knowledge management, CIKM ’99, pages 316–321, New York, NY, USA. ACM. Young-In Song, Kyoung-Soo Han, Sang-Bum Kim, So-Young Park, and Hae-Chang Rim. 2008. A novel retrieval approach reflecting variability of syntactic phrase representation. Journal of Intelligent Information Systems, 31(3):265–286, December. Munirathnam Srikanth and Rohini Srihari. 2002. Biterm language models for document retrieval. In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’02, pages 425–426, New York, NY, USA. ACM. Mark J. Steedman. 1990. Gapping as Constituent Coordination. Linguistics and Philosophy, 13(2):207– 263, April. Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2011. Learning to rank answers to non-factoid questions from web collections. Computational Linguistics, 37(2):351–383, June. C. J. van Rijsbergen. 1993. A theoretical basis for the use of co-occurrence data in information retrieval. Journal of Documentation, 33(2):106–119. Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. What is the Jeopardy model? a quasisynchronous grammar for QA. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 22–32, Prague, Czech Republic, June. Association for Computational Linguistics. Xiaobing Xue, Samuel Huston, and W. Bruce Croft. 2010. Improving verbose queries using subset distribution. In Proceedings of the 19th ACM international conference on Information and knowledge management, CIKM ’10, pages 1059–1068, New York, NY, USA. ACM. 516
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 517–527, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Coordination Structures in Dependency Treebanks Martin Popel, David Mareˇcek, Jan ˇStˇep´anek, Daniel Zeman, Zdenˇek ˇZabokrtsk´y Charles University in Prague, Faculty of Mathematics and Physics Institute of Formal and Applied Linguistics ( ´UFAL) Malostransk´e n´amˇest´ı 25, CZ-11800 Praha, Czechia {popel|marecek|stepanek|zeman|zabokrtsky}@ufal.mff.cuni.cz Abstract Paratactic syntactic structures are notoriously difficult to represent in dependency formalisms. This has painful consequences such as high frequency of parsing errors related to coordination. In other words, coordination is a pending problem in dependency analysis of natural languages. This paper tries to shed some light on this area by bringing a systematizing view of various formal means developed for encoding coordination structures. We introduce a novel taxonomy of such approaches and apply it to treebanks across a typologically diverse range of 26 languages. In addition, empirical observations on convertibility between selected styles of representations are shown too. 1 Introduction In the last decade, dependency parsing has gradually been receiving visible attention. One of the reasons is the increased availability of dependency treebanks, be they results of genuine dependency annotation projects or converted automatically from previously existing phrase-structure treebanks. In both cases, a number of decisions have to be made during the construction or conversion of a dependency treebank. The traditional notion of dependency does not always provide unambiguous solutions, e.g. when it comes to attaching functional words. Worse, dependency representation is at a loss when it comes to representing paratactic linguistic phenomena such as coordination, whose nature is symmetric (two or more conjuncts play the same role), as opposed to the head-modifier asymmetry of dependencies.1 1We use the term modifier (or child) for all types of dependent nodes including arguments. The dominating solution in treebank design is to introduce artificial rules for the encoding of coordination structures within dependency trees using the same means that express dependencies, i.e., by using edges and by labeling of nodes or edges. Obviously, any tree-shaped representation of a coordination structure (CS) must be perceived only as a “shortcut” since relations present in coordination structures form an undirected cycle, as illustrated already by Tesni`ere (1959). For example, if a noun is modified by two coordinated adjectives, there is a (symmetric) coordination relation between the two conjuncts and two (asymmetric) dependency relations between the conjuncts and the noun. However, as there is no obvious linguistic intuition telling us which tree-shaped CS encoding is better and since the degree of freedom has several dimensions, one can find a number of distinct conventions introduced in particular dependency treebanks. Variations exist both in topology (tree shape) and labeling. The main goal of this paper is to give a systematic survey of the solutions adopted in these treebanks. 
Naturally, the interplay of dependency and coordination links in a single tree leads to serious parsing issues.2 The present study does not try to decide which coordination style is the best from the parsing point of view.3 However, we believe that our survey will substantially facilitate experiments in this direction in the future, at least by exploring and describing the space of possible candidates. 2CSs have been reported to be one of the most frequent sources of parsing errors (Green and ˇZabokrtsk´y, 2012; McDonald and Nivre, 2007; K¨ubler et al., 2009; Collins, 2003). Their impact on quality of dependency-based machine translation can also be substantial; as documented on an Englishto-Czech dependency-based translation system (Popel and ˇZabokrtsk´y, 2009), 39% of serious translation errors which are caused by wrong parsing have to do with coordination. 3There might be no such answer, as different CS conventions might serve best for different applications or for different parser architectures. 517 The rest of the paper is structured as follows. Section 2 describes some known problems related to CS. Section 3 shows possible “styles” for representing CS. Section 4 lists treebanks whose CS conventions we studied. Section 5 presents empirical observations on CS convertibility. Section 6 concludes the paper. 2 Related work Let us first recall the basic well-known characteristics of CSs. In the simplest case of a CS, a coordinating conjunction joins two (usually syntactically and semantically compatible) words or phrases called conjuncts. Even this simplest case is difficult to represent within a dependency tree because, in the words of Lombardo and Lesmo (1998): Dependency paradigms exhibit obvious difficulties with coordination because, differently from most linguistic structures, it is not possible to characterize the coordination construct with a general schema involving a head and some modifiers of it. Proper formal representation of CSs is further complicated by the following facts: • CSs with more than two conjuncts (multiconjunct CSs) exist and are frequent. • Besides “private” modifiers of individual conjuncts, there are modifiers shared by all conjuncts, such as in “Mary came and cried”. Shared modifiers may appear alongside with private modifiers of particular conjuncts. • Shared modifiers can be coordinated, too: “big and cheap apples and oranges”. • Nested (embedded) coordinations are possible: “John and Mary or Sam and Lisa”. • Punctuation (commas, semicolons, three dots) is frequently used in CSs, mostly with multi-conjunct coordinations or juxtapositions which can be interpreted as CSs without conjunctions (e.g. “Don’t worry, be happy!”). • In many languages, comma or other punctuation mark may play the role of the main coordinating conjunction. • The coordinating conjunction may be a multiword expression (“as well as”). • Deficient CSs with a single conjunct exist. • Abbreviations like “etc.” comprise both the conjunction and the last conjunct. • Coordination may form very intricate structures when combined with ellipsis. For example, a conjunct can be elided while its arguments remain in the sentence, such as in the following traditional example: “I gave the books to Mary and the records to Sue.” • The border between paratactic and hypotactic surface means of expressing coordination relations is fuzzy. Some languages can use enclitics instead of conjunctions/prepositions, e.g. Latin “Senatus Populusque Romanus”. 
Purely hypotactic surface means such as the preposition in “John with Mary” occur too.4 • Careful semantic analysis of CSs discloses additional complications: if a node is modified by a CS, it might happen that it is the node itself (and not its modifiers) what should be semantically considered as a conjunct. Note the difference between “red and white wine” (which is synonymous to “red wine and white wine”) and “red and white flag of Poland”. Similarly, “five dogs and cats” has a different meaning than “five dogs and five cats”. Some of these issues were recognized already by Tesni`ere (1959). In his solution, conjuncts are connected by vertical edges directly to the head and by horizontal edges to the conjunction (which constitutes a cycle in every CS). Many different models have been proposed since, out of which the following are the most frequently used ones: • MS = Mel’ˇcuk style used in the MeaningText Theory (MTT): the first conjunct is the head of the CS, with the second conjunct attached as a dependent of the first one, third conjunct under the second one, etc. Coordinating conjunction is attached under the penultimate conjunct, and the last conjunct is attached under the conjunction (Mel’ˇcuk, 1988), • PS = Prague Dependency Treebank (PDT) style: all conjuncts are attached under the coordinating conjunction (along with shared modifiers, which are distinguished by a special attribute) (Hajiˇc et al., 2006), 4As discussed by Stassen (2000), all languages seem to have some strategy for expressing coordination. Some of them lack the paratactic surface means (the so called WITHlanguages), but the hypotactic surface means are present almost always. 518 • SS = Stanford parser style:5 the first conjunct is the head and the remaining conjuncts (as well as conjunctions) are attached under it. One can find various arguments supporting the particular choices. MTT possesses a complex set of linguistic criteria for identifying the governor of a relation (see Mazziotta (2011) for an overview), which lead to MS. MS is preferred in a rule-based dependency parsing system of Lombardo and Lesmo (1998). PS is advocated by ˇStˇep´anek (2006) who claims that it can represent shared modifiers using a single additional binary attribute, while MS would require a more complex co-indexing attribute. An argumentation of Tratz and Hovy (2011) follows a similar direction: We would like to change our [MS] handling of coordinating conjunctions to treat the coordinating conjunction as the head [PS] because this has fewer ambiguities than [MS]. . . We conclude that the influence of the choice of coordination style is a well-known problem in dependency syntax. Nevertheless, published works usually focus only on a narrow ad-hoc selection of few coordination styles, without giving any systematic perspective. Choosing a file format presents a different problem. Despite various efforts to standardize linguistic annotation,6 no commonly accepted standard exists. The primitive format used for CoNLL shared tasks is widely used in dependency parsing, but its weaknesses have already been pointed out (cf. Straˇn´ak and ˇStˇep´anek (2010)). Moreover, particular treebanks vary in their contents even more than in their format, i.e. each treebank has its own way of representing prepositions or different granularity of syntactic labels. 
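As a concrete anchor for the three established styles just listed, the toy encoding below assigns heads for "dogs, cats and rats" under MS, PS, and SS. This is only a sketch: punctuation attachment is left unspecified in the original MS and SS definitions, so the comma attachments here reflect an arbitrary choice, and index 0 stands for the external parent of the whole coordination structure.

```python
TOKENS = ["dogs", ",", "cats", "and", "rats"]        # token indices 1..5

STYLES = {
    # Mel'cuk style: chain of conjuncts; conjunction under the penultimate
    # conjunct, last conjunct under the conjunction.
    "MS": [0, 3, 1, 3, 4],
    # Prague style: the conjunction heads the CS; conjuncts and separating
    # punctuation hang below it.
    "PS": [4, 4, 4, 0, 4],
    # Stanford style: the first conjunct heads the CS; the other conjuncts,
    # the conjunction and the punctuation all attach to it.
    "SS": [0, 1, 1, 1, 1],
}

for style, heads in STYLES.items():
    print(style)
    for idx, (token, head) in enumerate(zip(TOKENS, heads), start=1):
        governor = "PARENT" if head == 0 else TOKENS[head - 1]
        print(f"  {idx} {token:<4} -> {governor}")
```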
3 Variations in representing coordination structures Our analysis of variations in representing coordination structures is based on observations from a set of dependency treebanks for 26 languages.7 5We use the already established MS-PS-SS distinction to facilitate literature overview; as shown in Section 3, the space of possible coordination styles is much richer. 6For example, TEI (TEI Consortium, 2013), PML (Hana and ˇStˇep´anek, 2012), SynAF (ISO 24615, 2010). 7The primary data sources are the following: Ancient Greek: Ancient Greek Dependency Treebank (Bamman and Crane, 2011), Arabic: Prague Arabic Dependency Treebank 1.0 (Smrˇz et al., 2008), Basque: Basque Dependency Treebank (larger version than CoNLL 2007 generously proIn accordance with the usual conventions, we assume that each sentence is represented by one dependency tree, in which each node corresponds to one token (word or punctuation mark). Apart from that, we deliberately limit ourselves to CS representations that have shapes of connected subgraphs of dependency trees. We limit our inventory of means of expressing CSs within dependency trees to (i) tree topology (presence or absence of a directed edge between two nodes, Section 3.1), and (ii) node labeling (additional attributes stored insided nodes, Section 3.2).8 Further, we expect that the set of possible variations can be structured along several dimensions, each of which corresponds to a certain simple characteristic (such as choosing the leftmost conjunct as the CS head, or attaching shared modifiers below the nearest conjunct). Even if it does not make sense to create the full Cartesian product of all dimensions because some values cannot be combined, it allows to explore the space of possible CS styles systematically.9 3.1 Topological variations We distinguish the following dimensions of topological variations of CS styles (see Figure 1): Family – configuration of conjuncts. We divide the topological variations into three main groups, labeled as Prague (fP), Moscow (fM), and vided by IXA Group) (Aduriz and others, 2003), Bulgarian: BulTreeBank (Simov and Osenova, 2005), Czech: Prague Dependency Treebank 2.0 (Hajiˇc et al., 2006), Danish: Danish Dependency Treebank (Kromann et al., 2004), Dutch: Alpino Treebank (van der Beek and others, 2002), English: Penn TreeBank 3 (Marcus et al., 1993), Finnish: Turku Dependency Treebank (Haverinen et al., 2010), German: Tiger Treebank (Brants et al., 2002), Greek (modern): Greek Dependency Treebank (Prokopidis et al., 2005), Hindi, Bengali and Telugu: Hyderabad Dependency Treebank (Husain et al., 2010), Hungarian: Szeged Treebank (Csendes et al., 2005), Italian: Italian Syntactic-Semantic Treebank (Montemagni and others, 2003), Latin: Latin Dependency Treebank (Bamman and Crane, 2011), Persian: Persian Dependency Treebank (Rasooli et al., 2011), Portuguese: Floresta sint´a(c)tica (Afonso et al., 2002), Romanian: Romanian Dependency Treebank (C˘al˘acean, 2008), Russian: Syntagrus (Boguslavsky et al., 2000), Slovene: Slovene Dependency Treebank (Dˇzeroski et al., 2006), Spanish: AnCora (Taul´e et al., 2008), Swedish: Talbanken05 (Nilsson et al., 2005), Tamil: TamilTB (Ramasamy and ˇZabokrtsk´y, 2012), Turkish: METU-Sabanci Turkish Treebank (Atalay et al., 2003). 8Edge labeling can be trivially converted to node labeling in tree structures. 9The full Cartesian product of variants in Figure 1 would result in topological 216 variants, but only 126 are applicable (the inapplicable combinations are marked with “—” in Figure 1). 
Those 126 topological variants can be further combined with labeling variants defined in Section 3.2. 519 Main family Prague family (code fP) [14 treebanks] Moscow family (code fM) [5 treebanks] Stanford family (code fS) [6 treebanks] Choice of head Head on left (code hL) [10 treebanks] dogs and , cats rats , cats and rats dogs Head on right (code hR) [14 treebanks] Mixed head (code hM) [1 treebank] A mixture of hL and hR Attachment of shared modifiers Shared modifier below the nearest conjunct (code sN) [15 treebanks] Shared modifier below head (code sH) [11 treebanks] lazy lazy dogs , cats rats and lazy dogs , la lazy dogs , and cats rats lazy dogs , cats and rats Attachment of coordinating conjunction Coordinating conjunction below previous conjunct (code cP) [2 treebanks] — dogs and rats , cats , cats rats dogs and Coordinating conjunction below following conjunct (code cF) [1 treebank] — and rats dogs rats , cats and , cats rats dogs and Coordinating conjunction between two conjuncts (code cB) [8 treebanks] — dogs and , cats rats , cats and rats dogs Coordinating conjunction as the head (code cH) is the only applicable style for the Prague family [14 treebanks] — — Placement of punctuation values pP [7 treebanks], pF [1 treebank] and pB [15 treebanks] are analogous to cP, cF and cB (but applicable also to the Prague family) Figure 1: Different coordination styles, variations in tree topology. Example phrase: “(lazy) dogs, cats and rats”. Style codes are described in Section 3.1. Stanford (fS) families.10 This first dimension distinguishes the configuration of conjuncts: in the Prague family, all the conjuncts are siblings governed by one of the conjunctions (or a punctuation fulfilling its role); in the Moscow family, the conjuncts form a chain where each node in the chain depends on the previous (or following) node; in the Stanford family, the conjuncts are siblings except for the first (or last) conjunct, which is the 10Names are chosen purely as a mnemonic device, so that Prague Dependency Treebank belongs to the Prague family, Mel’ˇcuk style belongs to the Moscow family, and Stanford parser style belongs to the Stanford family. head.11 Choice of head – leftmost or rightmost. In the Prague family, the head can be either the leftmost12 (hL) or the rightmost (hR) conjunction or punctuation. Similarly, in the Moscow and Stanford families, the head can be either the leftmost (hL) or the rightmost (hR) conjunct. A third op11Note that for CSs with just two conjuncts, fM and fS may look exactly the same (depending on the attachment of conjunctions and punctuation as described below). 12For simplicity, we use the terms left and right even if their meaning is reversed for languages with right-to-left writing systems such as Arabic or Persian. 520 tion (hM) is to mix hL and hR based on some criterion, e.g. the Persian treebank uses hR for coordination of verbs and hL otherwise. For the experiments in Section 5, we choose the head which is closer to the parent of the whole CS, with the motivation to make the edge between CS head and its parent shorter, which may improve parser training. Attachment of shared modifiers. Shared modifiers may appear before the first conjunct or after the last one. Therefore, it seems reasonable to attach shared modifiers either to the CS head (sH), or to the nearest (i.e. first or last) conjunct (sN). Attachment of coordinating conjunctions. 
In the Moscow family, conjunctions may be either part of the chain of conjuncts (cB), or they may be put outside of the chain and attached to the previous (cP) or following (cF) conjunct. In the Stanford family, conjunctions may be either attached to the CS head (and therefore between conjuncts) (cB), or they may be attached to the previous (cP) or the following (cF) conjunct. The cB option in both Moscow and Stanford families, treats conjunctions in the same way as conjuncts (with respect to topology only). In the Prague family, there is just one option available (cH) – one of the conjunctions is the CS head while the others are attached to it. Attachment of punctuation. Punctuation tokens separating conjuncts (commas, semicolons etc.) could be treated the same way as conjunctions. However, in most treebanks it is treated differently, so we consider it as well. The values pP, pF and pB are analogous to cP, cF and cB except that punctuation may be also attached to the conjunction in case of pP and pF (otherwise, a comma before the conjunction would be non-projectively attached to the member following the conjunction). The three established styles mentioned in Section 2 can be defined in terms of the newly introduced abbreviations: PS = fPhRsHcHpB, MS = fMhLsNcBp?, and SS = fShLsNcBp?.13 3.2 Labeling variations Most state-of-the-art dependency parsers can produce labeled edges. However, the parsers produce only one label per edge. To fully capture CSs, we need more than one label, because there are several aspects involved (see the initial assump13The question marks indicate that the original Mel’ˇcuk and Stanford parser styles ignore punctuation. tions in Section 3): We need to identify the coordinating conjunction (its POS tag might not be enough), conjuncts, shared modifiers, and punctuation that separates conjuncts. Besides that, there should be a label classifying the dependency relation between the CS and its parent. Some of the information can be retrieved from the topology of the tree and the “main label” of each node, but not everything. The additional information can be attached to the main label, but such approach obscures the logical structure. In the Prague family, there are two possible ways to label a conjunction and conjuncts: Code dU (“dependency labeled at the upper level of the CS”). The dependency relation of the whole CS to its parent is represented by the label of the conjunction, while the conjuncts are marked with a special label for conjuncts (e.g. ccof in the Hyderabad Dependency Treebank). Code dL (“lower level”). The CS is represented by a coordinating conjunction (or punctuation if there is no conjunction) with a special label (e.g. Coord in PDT). Subsequently, each conjunct has its own label that reflects the dependency relation towards the parent of the whole CS, therefore, conjuncts of the same CS can have different labels, e.g. “Who[SUBJ] and why[ADV] did it?” Most Prague family treebanks use sH, i.e. shared modifiers are attached to the head (coordinating conjunction). Each child of the head has to belong to one of three sets: conjuncts, shared modifiers, and punctuation or additional conjunctions. In PDT, conjuncts, punctuation and additional conjunctions are recognized by specific labels. Any other children of the head are shared modifiers. In the Stanford and Moscow families, one of the conjuncts is the head. 
In practice, it is never labeled as a conjunct explicitly, because the fact that it is a conjunct can be deduced from the presence of conjuncts among its children. Usually, the other conjuncts are labeled as conjuncts; conjunctions and punctuation also have a special label. This type of labeling corresponds to the dU type. Alternatively (as found in the Turkish treebank, dL), all conjuncts in the Moscow chain have their own dependency labels and the fact that they are conjuncts follows from the COORDINATION labels of the conjunction and punctuation nodes between them. To represent shared modifiers in the Stan521 ford and Moscow families, an additional label is needed again to distinguish between private and shared modifiers since they cannot be distinguished topologically. Moreover, if nested CSs are allowed, a binary label is not sufficient (i.e. “shared” versus “private”) because it also has to indicate which conjuncts the shared modifier belongs to.14 We use the following binary flag codes for capturing which CS participants are distinguished in the annotation: m01 = shared modifiers annotated; m10 = conjuncts annotated; m11 = both annotated; m00 = neither annotated. 4 Coordination Structures in Treebanks In this section, we identify the CS styles defined in the previous section as used in the primary treebank data sources; statistical observations (such as the amount of annotated shared modifiers) presented here, as well as experiments on CS-style convertibility presented in Section 5.2, are based on the normalized shapes of the treebanks as contained in the HamleDT 1.0 treebank collection (Zeman et al., 2012).15 Some of the treebanks were downloaded individually from the web, but most of them came from previously published collections for dependency parsing campaigns: six languages from CoNLL-2006 (Buchholz and Marsi, 2006), seven languages from CoNLL-2007 (Nivre et al., 2007), two languages from CoNLL-2009 (Hajiˇc and others, 2009), three languages from ICON-2010 (Husain et al., 2010). Obviously, there is a certain risk that the CS-related information contained in the source treebanks was slightly biased by the properties of the CoNLL format upon conversion. In addition, many of the treebanks were natively dependency-based (cf. the 2nd column of Table 1), but some were originally based on constituents and thus specific converters to the CoNLL format had to be created (for instance, the Spanish phrase-structure trees were converted to dependencies using a procedure described by Civit et al. (2006); similarly, treebank-specific converters have been used for other languages). Again, 14This is not needed in Prague family where shared modifiers are attached to the conjunction provided that each shared modifier is shared by conjuncts that form a full subtree together with their coordinating conjunctions; no exceptions were found during the annotation process of the PDT. 15A subset of the treebanks whose license terms permit redistribution is available directly at http://ufal.mff.cuni.cz/hamledt/. Danish Romanian hunde rotter , katte og c tter câini şi pisici şobolani Hungarian kutyák , macskák és patkányok Figure 2: Annotation styles of a few treebanks do not fit well into the multidimensional space defined in Section 3.1. there is some risk that the CS-related information contained in treebanks resulting from such conversions is slightly different from what was intended in the very primary annotation. There are several other languages (e.g. 
Estonian or Chinese) which are not included in our study, despite of the fact that constituency treebanks do exist for them. The reason is that the choice of their CS style would be biased, because no independent converters exist – we would have to convert them to dependencies ourselves. We also know about several more dependency treebanks that we have not processed yet. Table 1 shows 26 languages whose treebanks we have studied from the viewpoint of their CS styles. It gives the basic quantitative properties of the treebanks, their CS style in terms of the taxonomy introduced in Section 3, as well as statistics related to CSs: the average number of CSs per 100 tokens, the average number of conjuncts per one CS, the average number of shared modifiers per one CS,16 and the percentage of nested CSs among all CSs. The reader can return to Figure 1 to see the basic statistics on the “popularity” of individual design decisions among the developers of dependency treebanks or constituency treebank converters. CS styles of most treebanks are easily classifiable using the codes introduced in Section 3, plus a few additional codes: • p0 = punctuation was removed from the treebank. 16All non-Prague family treebanks are marked sN and m00 or m10, (i.e. shared modifiers not marked in the original annotation, but attached to the head conjunct) because we found no counterexamples (modifiers attached to a conjunct, but not the nearest one). The HamleDT normalization procedure contains a few heuristics to detect shared modifiers, but it cannot recover the missing distinction reliably, so the numbers in the “SMs/CJ” column are mostly underestimated. 522 Language Orig. Data Sents. Tokens Original CS CSs / CJs / SMs / Nested RT type set style code 100 tok. CS CS CS[%] UAS Ancient Greek dep prim. 31 316 461 782 fP hR sH cH pB dL m11 6.54 2.17 0.16 10.3 97.86 Arabic dep C07 3 043 116 793 fP hL sH cH pB dL m00 3.76 2.42 0.13 10.6 96.69 Basque dep prim. 11 225 151 593 fP hR sN cH pP dU m00 3.37 2.09 0.03 5.1 99.32 Bengali dep I10 1 129 7 252 fP hR sH cH pP dU m11 4.87 1.71 0.05 24.1 99.97 Bulgarian phr C06 13 221 196 151 fS hL sN cB pB dU m10 2.99 2.19 0.00 0.0 99.74 Czech dep C07 25 650 437 020 fP hR sH cH pB dL m11 4.09 2.16 0.20 14.6 99.42 Danish dep C06 5 512 100 238 fS* hL sN cP pB dU m10 3.68 1.93 0.13 7.5 99.76 Dutch phr C06 13 735 200 654 fP hR sN cH pP dU m10 2.06 2.17 0.05 3.3 99.47 English phr C07 40 613 991 535 fP hR sH cH pB dU m10 2.07 2.33 0.05 6.3 99.84 Finnish dep prim. 4 307 58 576 fS hL sN cB pB dU m10 4.06 2.41 0.00 6.4 99.70 German phr C09 38 020 680 710 fM hL sN cP pP dU m10 2.79 2.09 0.01 0.0 99.73 Greek dep C07 2 902 70 223 fP hR sH cH pB dL m11 3.25 2.48 0.18 7.2 99.43 Hindi dep I10 3 515 77 068 fP hR sH cH pP dU m11 2.45 1.97 0.04 10.3 98.35 Hungarian phr C07 6 424 139 143 fT hX sN cX pX dL m00 2.37 1.90 0.01 2.2 99.84 Italian dep C07 3 359 76 295 fS hL sN cB pB dU m10 3.32 2.02 0.03 3.8 99.51 Latin dep prim. 3 473 53 143 fP hR sH cH pB dL m11 6.74 2.24 0.41 12.3 97.45 Persian dep prim. 12 455 189 572 fM*hM sN cB pP dU m00 4.18 2.10 0.18 3.7 99.82 Portuguese phr C06 9 359 212 545 fS hL sN cB pB dU m10 2.51 1.95 0.26 11.1 99.16 Romanian dep prim. 4 042 36 150 fP* hR sN cH p0 dU m10 1.80 2.00 0.00 0.0 100.00 Russian dep prim. 
34 895 497 465 fM hL sN cB p0 dU m10 4.02 2.02 0.07 3.9 99.86 Slovene dep C06 1 936 35 140 fP hR sH cH pB dL m00 4.31 2.49 0.00 10.8 98.87 Spanish phr C09 15 984 477 810 fS hL sN cB pB dU m10 2.79 1.98 0.14 12.7 99.24 Swedish phr C06 11 431 197 123 fM hL sN cF pF dU m10 3.94 2.19 0.13 0.7 99.66 Tamil dep prim. 600 9 581 fP hR sH cH pB dL m11 1.66 2.46 0.22 3.8 99.67 Telugu dep I10 1 450 5 722 fP hR sH cH pP dU m11 3.48 1.59 0.06 5.0 100.00 Turkish dep C07 5 935 69 695 fM hR sN cB pB dL m10 3.81 2.04 0.00 34.3 99.23 Table 1: Overview of analyzed treebanks. prim. = primary source; C06–C09 = CoNLL 2006–2009; I10 = ICON 2010; SM = shared modifier; CJ = conjunct; Nested CS = portion of CSs participating in nested CSs (both as the inner and outer CS); RT UAS = unlabeled attachment score of the roundtrip experiment described in Section 5. Style codes are defined in Sections 3 and 4. • fM* = Persian treebank uses a mix of fM and fS: fS for coordination of verbs and fM otherwise. Figure 2 shows three other anomalies: • fS* = Danish treebank employs a mixture of fS and fM, where the last conjunct is attached indirectly via the conjunction. • fP* = Romanian treebank omits punctuation tokens and multi-conjunct coordinations get split. • fT = Hungarian Szeged treebank uses “Tesni`ere family” – disconnected graphs for CSs where conjuncts (and conjunction and punctuation) are attached directly to the parent of CS, and so the other style dimensions are not applicable (hX, cX, pX). 5 Empirical Observations on Convertibility of Coordination Styles The various styles cannot represent the CS-related information to the same extent. For example, it is not possible to represent nested CSs in the Moscow and Stanford families without significantly changing the number of possible labels.17 The dL style (which is most easily applicable to the Prague family) can represent coordination of different dependency relations. This is again not possible in the other styles without adding e.g. a special “prefix” denoting the relations. We can see that the Prague family has a greater expressive power than the other two families: it can represent complex CSs using just one additional binary label, distinguishing between shared modifiers and conjuncts. A similar additional label is needed in the other styles to distinguish between shared and private modifiers. Because of the different expressive power, converting a CS from one style to another may lead to a loss of information. For example, as 17Mel’ˇcuk uses “grouping” to nest CSs – cf. related solutions involving coindexing or bubble trees (Kahane, 1997). However, these approaches were not used in any of the researched treebanks. To combine grouping with shared modifiers, each group in a tree should have a different identifier. 523 there is no way of representing shared modifiers in the Moscow family without an additional attribute, converting a CS with shared modifiers from Prague to Moscow family makes the modifiers private. When converting back, one can use certain heuristics to handle the most obvious cases, but sometimes the modifiers will stay private (very often, the nature of a modifier depends on context or is debatable even for humans, e.g. “Young boys and girls”). 5.1 Transformation algorithm We developed an algorithm to transform one CS style to another. Two subtasks must be solved by the algorithm: identification of individual CSs and their participants, and transforming of the individual CSs. 
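As a rough preview of how these two subtasks interact in the depth-first procedure described below, they can be sketched as follows. This is a minimal Python illustration over a hypothetical Node object with a children attribute; is_topmost_node_of_cs, collect_participants and rehang_and_relabel are placeholder names rather than the authors' implementation, and participant classification is simplified to happen on the way back up instead of during the descent.

def transform_cs_styles(node, target_style):
    # Post-order (depth-first) traversal: all children are processed before the
    # current node, so inner (nested) CSs are transformed before outer ones.
    for child in list(node.children):
        transform_cs_styles(child, target_style)
    if is_topmost_node_of_cs(node):
        # Subtask 1: identify the CS and classify its participants into conjuncts,
        # coordinating conjunctions, shared modifiers and separating punctuation.
        participants = collect_participants(node)
        # Subtask 2: re-attach and relabel the participants according to the
        # target style code (e.g. fMhLsNcB).
        rehang_and_relabel(participants, target_style)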
Obviously, the individual CSs cannot be transformed independently because of coordination nesting. For instance, when transforming a nested coordination from the Prague style to the Moscow style (e.g. to fMhL), the leftmost conjunct in the inner (lower) coordination must climb up to become the head of the inner CS, but then it must climb up once again to become the head of the outer (upper) CS too. This shows that inner CSs must be transformed first. We tackle this problem by a depth-first recursion. When going down the tree, we only recognize all the participants of the CSs, classify them and gather them in a separate data structure (one for each visited CS). The following four types of CS participants are distinguished: coordinating conjunctions, conjuncts, shared modifiers, and punctuation tokens that separate conjuncts.18 No change of the tree is performed during these descent steps. When returning back from the recursion (i.e., when climbing from a node back up to its parent), we test whether the abandoned node is the topmost node of some CS. If so, then this CS is transformed, which means that its participants are re-attached and relabelled according to the target CS style. This procedure naturally guarantees that the inner CSs are transformed first and that all CSs are transformed when the recursion returns to the root.

18Conjuncts are explicitly marked in most styles. Coordinating conjunctions can usually be identified with the help of dependency labels and POS tags. Punctuation separating conjuncts can be detected with high accuracy using simple rules. If shared modifiers are not annotated (code m00 or m10), one can imagine rule-based heuristics or special classifiers trained to distinguish shared modifiers. For the experiments in this section, we use the gold HamleDT annotation attribute that marks shared modifiers.

5.2 Roundtrip experiment

The number of possible conversion directions obviously grows quadratically with the number of styles. So far, we have limited ourselves only to conversions from/to the style of the HamleDT treebank collection, which contains all the treebanks under our study already converted into a common scheme. The common scheme is based on the conventions of PDT, whose CS style is fPhRsHcHpB.19 We selected nine styles (3 families times 3 head choices) and transformed all the HamleDT scheme treebanks to these nine styles and back, which we call a roundtrip. The resulting averaged unlabeled attachment scores (UAS, evaluated against the HamleDT scheme) in the last column of Table 1 indicate that the percentage of transformation errors (i.e. tokens attached to a different parent after the roundtrip) is lower than 1% for 20 out of the 26 languages.20 A manual inspection revealed two main error sources. First, as noted above, the Stanford and Moscow families have lower expressive power than the Prague family, so naturally, the inverse transformation was ambiguous and the transformation heuristics were not capable of identifying the correct variant every time. Second, we also encountered inconsistencies in the original treebanks (which we were not trying to fix in HamleDT for now).

6 Conclusions and Future Work

We described a (theoretically very large) space of possible representations of CSs within the dependency framework. We pointed out a range of details that make CSs a really complex phenomenon; anyone dealing with CSs in treebanking should take these observations into account. We proposed a taxonomy of those approaches

19As documented in Zeman et al.
(2012), the normalization procedures used in HamleDT embrace many other phenomena as well (not only those related to coordination), and involve both structural transformation and dependency relation relabeling. 20Table 1 shows that Latin and Ancient Greek treebanks have on average more than 6 CSs per 100 tokens, more than 2 conjuncts per CS, and Latin has also the highest number of shared modifiers per CS. Therefore the percentage of nodes affected by the roundtrip is the highest for these languages and the lower roundtrip UAS is not surprising. 524 that have been argued for in literature or employed in real treebanks. We studied 26 existing treebanks of different languages. For each value of each dimension in Figure 1, we found at least one treebank where the value is used; even so, several treebanks take their own unique path that cannot be clearly classified under the taxonomy (the taxonomy could indeed be extended, for the price of being less clearly arranged). We discussed the convertibility between the various styles and implemented a universal tool that transforms between any two styles of the taxonomy. The tool achieves a roundtrip accuracy close to 100%. This is important because it opens the door to easily switching coordination styles for parsing experiments, phrase-to-dependency conversion etc. While the focus of this paper is to explore and describe the expressive power of various annotation styles, we did not address the learnability of the styles by parsers. That will be a complementary point of view, and thus a natural direction of future work for us. Acknowledgments We thank the providers of the primary data resources. The work on this project was supported by the Czech Science Foundation grants no. P406/11/1499 and P406/2010/0875, and by research resources of the Charles University in Prague (PRVOUK). This work has been using language resources developed and/or stored and/or distributed by the LINDAT-Clarin project of the Ministry of Education of the Czech Republic (project LM2010013). Further, we would like to thank Jan Hajiˇc, Ondˇrej Duˇsek and four anonymous reviewers for many useful comments on the manuscript of this paper. References Itzair Aduriz et al. 2003. Construction of a Basque dependency treebank. In Proceedings of the 2nd Workshop on Treebanks and Linguistic Theories. Susana Afonso, Eckhard Bick, Renato Haber, and Diana Santos. 2002. “Floresta sint´a(c)tica”: a treebank for Portuguese. In LREC, pages 1968–1703. Nart B. Atalay, Kemal Oflazer, and Bilge Say. 2003. The annotation process in the Turkish treebank. In Proceedings of the 4th Intern. Workshop on Linguistically Interpreteted Corpora (LINC). David Bamman and Gregory Crane. 2011. The Ancient Greek and Latin dependency treebanks. In Language Technology for Cultural Heritage, Theory and Applications of Natural Language Processing, pages 79–98. Springer Berlin Heidelberg. Igor Boguslavsky, Svetlana Grigorieva, Nikolai Grigoriev, Leonid Kreidlin, and Nadezhda Frid. 2000. Dependency treebank for Russian: Concept, tools, types of information. In Proceedings of the 18th conference on Computational linguistics-Volume 2, pages 987–991. Association for Computational Linguistics Morristown, NJ, USA. Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. The TIGER treebank. In Proceedings of the Workshop on Treebanks and Linguistic Theories, Sozopol. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. 
In Proceedings of CoNLL, pages 149–164. Montserrat Civit, Maria Ant`onia Mart´ı, and N´uria Buf´ı. 2006. Cat3LB and Cast3LB: From constituents to dependencies. In FinTAL, volume 4139 of Lecture Notes in Computer Science, pages 141–152. Springer. Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational linguistics, 29(4):589–637. D´ora Csendes, J´anos Csirik, Tibor Gyim´othy, and Andr´as Kocsor. 2005. The Szeged treebank. In TSD, volume 3658 of Lecture Notes in Computer Science, pages 123–131. Springer. Mihaela C˘al˘acean. 2008. Data-driven dependency parsing for Romanian. Master’s thesis, Uppsala University, August. Saˇso Dˇzeroski, Tomaˇz Erjavec, Nina Ledinek, Petr Pajas, Zdenˇek ˇZabokrtsk´y, and Andreja ˇZele. 2006. Towards a Slovene dependency treebank. In LREC 2006, pages 1388–1391, Genova, Italy. European Language Resources Association (ELRA). Nathan Green and Zdenˇek ˇZabokrtsk´y. 2012. Hybrid combination of constituency and dependency trees into an ensemble dependency parser. In Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data, pages 19–26, Avignon, France. Association for Computational Linguistics. Jan Hajiˇc, Jarmila Panevov´a, Eva Hajiˇcov´a, Petr Sgall, Petr Pajas, Jan ˇStˇep´anek, Jiˇr´ı Havelka, Marie Mikulov´a, Zdenˇek ˇZabokrtsk´y, and Magda ˇSevˇc´ıkov´a-Raz´ımov´a. 2006. Prague Dependency Treebank 2.0. CD-ROM, Linguistic Data Consortium, LDC Catalog No.: LDC2006T01, Philadelphia. 525 Jan Hajiˇc et al. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL-2009), June 4-5, Boulder, Colorado, USA. Jirka Hana and Jan ˇStˇep´anek. 2012. Prague markup language framework. In Proceedings of the Sixth Linguistic Annotation Workshop, pages 12– 21, Stroudsburg, PA, USA. Association for Computational Linguistics, Association for Computational Linguistics. Katri Haverinen, Timo Viljanen, Veronika Laippala, Samuel Kohonen, Filip Ginter, and Tapio Salakoski. 2010. Treebanking Finnish. In Proceedings of the Ninth International Workshop on Treebanks and Linguistic Theories (TLT9), pages 79–90. Samar Husain, Prashanth Mannem, Bharat Ambati, and Phani Gadde. 2010. The ICON-2010 tools contest on Indian language dependency parsing. In Proceedings of ICON-2010 Tools Contest on Indian Language Dependency Parsing, Kharagpur, India. ISO 24615. 2010. Language resource management – Syntactic annotation framework (SynAF). Sylvain Kahane. 1997. Bubble trees and syntactic representations. In Proceedings of the 5th Meeting of the Mathematics of the Language, DFKI, Saarbrucken. Matthias T. Kromann, Line Mikkelsen, and Stine Kern Lynge. 2004. Danish dependency treebank. Sandra K¨ubler, Erhard Hinrichs, Wolfgang Maier, and Eva Klett. 2009. Parsing coordinations. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 406–414, Athens, Greece, March. Association for Computational Linguistics. Vincenzo Lombardo and Leonardo Lesmo. 1998. Unit coordination and gapping in dependency theory. In Processing of Dependency-Based Grammars; proceedings of the workshop. COLING-ACL, Montreal. Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19:313–330. Nicolar Mazziotta. 2011. 
Coordination of verbal dependents in Old French: Coordination as a specified juxtaposition or apposition. In Proceedings of International Conference on Dependency Linguistics (DepLing 2011). Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 122–131. Igor A. Mel’ˇcuk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press. Simonetta Montemagni et al. 2003. Building the Italian syntactic-semantic treebank. In Building and using Parsed Corpora, Language and Speech series, pages 189–210, Dordrecht. Kluwer. Jens Nilsson, Johan Hall, and Joakim Nivre. 2005. MAMBA meets TIGER: Reconstructing a Swedish treebank from antiquity. In Proceedings of the NODALIDA Special Session on Treebanks. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL 2007 Shared Task. EMNLP-CoNLL, June. Martin Popel and Zdenˇek ˇZabokrtsk´y. 2009. Improving English-Czech Tectogrammatical MT. The Prague Bulletin of Mathematical Linguistics, (92):1–20. Prokopis Prokopidis, Elina Desipri, Maria Koutsombogera, Harris Papageorgiou, and Stelios Piperidis. 2005. Theoretical and practical issues in the construction of a Greek dependency treebank. In Proceedings of the 4th Workshop on Treebanks and Linguistic Theories (TLT), pages 149–160. Loganathan Ramasamy and Zdenˇek ˇZabokrtsk´y. 2012. Prague dependency style treebank for Tamil. In Proceedings of LREC 2012, pages 23–25, ˙Istanbul, Turkey. European Language Resources Association. Mohammad Sadegh Rasooli, Amirsaeid Moloodi, Manouchehr Kouhestani, and Behrouz MinaeiBidgoli. 2011. A syntactic valency lexicon for Persian verbs: The first steps towards Persian dependency treebank. In 5th Language & Technology Conference (LTC): Human Language Technologies as a Challenge for Computer Science and Linguistics, pages 227–231, Pozna´n, Poland. Kiril Simov and Petya Osenova. 2005. Extending the annotation of BulTreeBank: Phase 2. In The Fourth Workshop on Treebanks and Linguistic Theories (TLT 2005), pages 173–184, Barcelona, December. Otakar Smrˇz, Viktor Bielick´y, Iveta Kouˇrilov´a, Jakub Kr´aˇcmar, Jan Hajiˇc, and Petr Zem´anek. 2008. Prague Arabic dependency treebank: A word on the million words. In Proceedings of the Workshop on Arabic and Local Languages (LREC) 2008, pages 16–23, Marrakech, Morocco. European Language Resources Association. Leon Stassen. 2000. And-languages and withlanguages. Linguistic Typology, 4(1):1–54. Jan ˇStˇep´anek. 2006. Capturing a Sentence Structure by a Dependency Relation in an Annotated Syntactical Corpus (Tools Guaranteeing Data Consistence) (in Czech). Ph.D. thesis, Charles Univer526 sity in Prague, Faculty of Mathematics and Physics, Prague, Czech Republic. Pavel Straˇn´ak and Jan ˇStˇep´anek. 2010. Representing layered and structured data in the CoNLL-ST format. In Alex Fang, Nancy Ide, and Jonathan Webster, editors, Proceedings of the Second International Conference on Global Interoperability for Language Resources, pages 143–152, Hong Kong, China. City University of Hong Kong, City University of Hong Kong. Mariona Taul´e, Maria Ant`onia Mart´ı, and Marta Recasens. 2008. AnCora: Multilevel annotated corpora for Catalan and Spanish. In LREC. 
European Language Resources Association. TEI Consortium. 2013. TEI P5: Guidelines for Electronic Text Encoding and Interchange. Lucien Tesni`ere. 1959. El´ements de syntaxe structurale. Paris. Stephen Tratz and Eduard Hovy. 2011. A fast, accurate, non-projective, semantically-enriched parser. In Proceedings of EMNLP, pages 1257–1268, Edinburgh, Scotland, UK, July. Association for Computational Linguistics. Leonoor van der Beek et al. 2002. Chapter 5. The Alpino dependency treebank. In Algorithms for Linguistic Processing NWO PIONIER Progress Report, Groningen, The Netherlands. Daniel Zeman, David Mareˇcek, Martin Popel, Loganathan Ramasamy, Jan ˇStˇep´anek, Zdenˇek ˇZabokrtsk´y, and Jan Hajiˇc. 2012. HamleDT: To parse or not to parse? In Proceedings of LREC 2012, pages 2735–2741, ˙Istanbul, Turkey. European Language Resources Association. 527
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 528–538, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics GlossBoot: Bootstrapping Multilingual Domain Glossaries from the Web Flavio De Benedictis, Stefano Faralli and Roberto Navigli Dipartimento di Informatica Sapienza Universit`a di Roma [email protected],{faralli,navigli}@di.uniroma1.it Abstract We present GlossBoot, an effective minimally-supervised approach to acquiring wide-coverage domain glossaries for many languages. For each language of interest, given a small number of hypernymy relation seeds concerning a target domain, we bootstrap a glossary from the Web for that domain by means of iteratively acquired term/gloss extraction patterns. Our experiments show high performance in the acquisition of domain terminologies and glossaries for three different languages. 1 Introduction Much textual content, such as that available on the Web, contains a great deal of information focused on specific areas of knowledge. However, it is not infrequent that, when reading a domainspecific text, we humans do not know the meaning of one or more terms. To help the human understanding of specialized texts, repositories of textual definitions for technical terms, called glossaries, are compiled as reference resources within each domain of interest. Interestingly, electronic glossaries have been shown to be key resources not only for humans, but also in Natural Language Processing (NLP) tasks such as Question Answering (Cui et al., 2007), Word Sense Disambiguation (Duan and Yates, 2010; Faralli and Navigli, 2012) and ontology learning (Navigli et al., 2011; Velardi et al., 2013). Today large numbers of glossaries are available on the Web. However most such glossaries are small-scale, being made up of just some hundreds of definitions. Consequently, individual glossaries typically provide a partial view of a given domain. Moreover, there is no easy way of retrieving the subset of Web glossaries which appertains to a domain of interest. Although online services such as Google Define allow the user to retrieve definitions for an input term, such definitions are extracted from Web glossaries and put together for the given term regardless of their domain. As a result, gathering a large-scale, full-fledged domain glossary is not a speedy operation. Collaborative efforts are currently producing large-scale encyclopedias, such as Wikipedia, which are proving very useful in NLP (Hovy et al., 2013). Interestingly, wikipedias also include manually compiled glossaries. However, such glossaries still suffer from the same above-mentioned problems, i.e., being incomplete or over-specific,1 and hard to customize according to a user’s needs. To automatically obtain large domain glossaries, over recent years computational approaches have been developed which extract textual definitions from corpora (Navigli and Velardi, 2010; Reiplinger et al., 2012) or the Web (Fujii and Ishikawa, 2000). The former methods start from a given set of terms (possibly automatically extracted from a domain corpus) and then harvest textual definitions for these terms from the input corpus using a supervised system. Webbased methods, instead, extract text snippets from Web pages which match pre-defined lexical patterns, such as “X is a Y”, along the lines of Hearst (1992). 
These approaches typically perform with high precision and low recall, because they fall short of detecting the high variability of the syntactic structure of textual definitions. To address the low-recall issue, recurring cue terms occurring within dictionary and encyclopedic resources can be automatically extracted and incorporated into lexical patterns (Saggion, 2004). However, this approach is term-specific and does not scale to arbitrary terminologies and domains. In this paper we propose GlossBoot, a novel approach which reduces human intervention to a bare minimum and exploits the Web to learn a 1http://en.wikipedia.org/wiki/Portal:Contents/Glossaries 528 Pattern and glossary extraction Initial seed selection Gloss ranking and filtering Seed queries Seed selection initial seeds new seeds search results domain glossary Gk final glossary 1 2 3 4 5 Figure 1: The GlossBoot bootstrapping process for glossary learning. full-fledged domain glossary. Given a domain and a language of interest, we bootstrap the glossary learning process with just a few hypernymy relations (such as computer is-a device), with the only condition that the (term, hypernym) pairs must be specific enough to implicitly identify the domain in the target language. Hence we drop the requirement of a large domain corpus, and also avoid the use of training data or a manually defined set of lexical patterns. To the best of our knowledge, this is the first approach which jointly acquires large amounts of terms and glosses from the Web with minimal supervision for any target domain and language. 2 GlossBoot Our objective is to harvest a domain glossary G containing pairs of terms/glosses in a given language. To this end, we automatically populate a set of HTML patterns P which we use to extract definitions from Web glossaries. Initially, both P := ∅and G := ∅. We incrementally populate the two sets by means of an initial seed selection step and four iterative steps (cf. Figure 1): Step 1. Initial seed selection: first, we manually select a set of K hypernymy relation seeds S = {(t1, h1), . . . , (tK, hK)}, where the pair (ti, hi) contains a term ti and its generalization hi (e.g., (firewall, security system)). This is the only human input to the entire glossary learning process. The selection of the input seeds plays a key role in the bootstrapping process, in that the pattern and gloss extraction process will be driven by these seeds. The chosen hypernymy relations thus have to be as topical and representative as possible for the domain of interest (e.g., (compiler, computer program) is an appropriate pair for computer science, while (byte, unit of measurement) is not, as it might cause the extraction of several glossaries of units and measures). We now set the iteration counter k to 1 and start the first iteration of the glossary bootstrapping process (steps 2-5). After each iteration k, we keep track of the set of glosses Gk, acquired during iteration k. Step 2. Seed queries: for each seed pair (ti, hi), we submit the following query to a Web search engine: “ti” “hi” glossaryKeyword2 (where glossaryKeyword is the term in the target language referring to glossary (i.e., glossary for English, glossaire for French etc.)) and collect the top-ranking results for each query.3 Each resulting page is a candidate glossary for the domain implicitly identified by our relation seeds S. Step 3. Pattern and glossary extraction: we initialize the glossary for iteration k as follows: Gk := ∅. 
Next, from each resulting page, we harvest all the text snippets s starting with ti and ending with hi (e.g., "firewall</b> – a <i>security system" where ti = firewall and hi = security system), i.e., s = ti . . . hi. For each such text snippet s, we perform the following substeps:

(a) extraction of the term/gloss separator: we start from ti and move right until we extract the longest sequence pM of HTML tags and non-alphanumeric characters, which we call the term/gloss separator, between ti and the glossary definition (e.g., "</b> -" between "firewall" and "a" in the above example).

(b) gloss extraction: we expand the snippet s to the right of hi in search of the entire gloss of ti, i.e., until we reach a block element (e.g., <span>, <p>, <div>), while ignoring formatting elements such as <b>, <i> and <a> which are typically included within a definition sentence. As a result, we obtain the sequence ti pM glosss(ti) pR, where glosss(ti) is our gloss for seed term ti in snippet s (which includes hi by construction) and pR is the HTML block element to the right of the extracted gloss. In Figure 2 we show the decomposition of our example snippet matching the seed (firewall, security system).

(c) pattern instance extraction: we extract the following pattern instance: pL ti pM glosss(ti) pR, where pL is the longest sequence of HTML tags and non-alphanumeric characters obtained when moving to the left of ti (see Figure 2).

(d) pattern extraction: we generalize the above pattern instance to the following pattern: pL * pM * pR, i.e., we replace ti and glosss(ti) with *. For the above example, we obtain the following pattern: <p><b> * </b> - * </p>. Finally, we add the generalized pattern to the set of patterns P, i.e., P := P ∪ {pL * pM * pR}. We also add the first sentence of the retrieved gloss glosss(ti) to our glossary Gk, i.e., Gk := Gk ∪ {(ti, first(glosss(ti)))}, where first(g) returns the first sentence of gloss g.

(e) pattern matching: finally, we look for additional pairs of terms/glosses in the Web page containing the snippet s by matching the page against the generalized pattern pL * pM * pR. We then add to Gk the new (term, gloss) pairs matching the generalized pattern. In Table 1 we show some nontrivial generalized patterns together with matching HTML text snippets.

2In what follows we use the typewriter font for keywords and term/gloss separators. 3We use the Google Ajax APIs, which return the 64 top-ranking search results.

Figure 2: An example of decomposition during pattern extraction for a snippet matching the seed pair (firewall, security system).

Table 1: Examples of generalized patterns together with matching HTML text snippets.
Generalized pattern: <strong> * </strong> - * </span> | HTML text snippet: <strong>Interrupt</strong> - The suspension of normal program execution to perform a higher priority service routine as requested by a peripheral device. </span>
Generalized pattern: <dt> * </dt><dd> * </dd> | HTML text snippet: <dt>Netiquette</dt><dd>The established conventions of online politeness are called netiquette.</dd>
Generalized pattern: <h3> * </h3><p> * </p> | HTML text snippet: <h3>Compiler</h3><p>A program that translates source code, such as C++ or Pascal, into directly executable machine code.</p>
Generalized pattern: <span> * </span> - * </p> | HTML text snippet: <span>Signature</span> - A function's name and parameter list. </p>
Generalized pattern: <span> * </span>: * <span> | HTML text snippet: <span>Blog</span>: Short for "web log", a blog is an online journal. <span>

As a result of step 3, we obtain a glossary Gk for the terms discovered at iteration k.
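To make substeps (a)-(e) concrete, here is a minimal sketch of the separator/pattern extraction and the subsequent pattern matching over a raw HTML string. It is only an approximation under simplifying assumptions: regular expressions stand in for proper HTML handling, the tag lists are illustrative, and the function names are ours rather than the authors'.

import re

BLOCK_TAG = r"</?(?:p|div|span|dd|dt|li|td|h[1-6])\b[^>]*>"   # block elements ending a gloss
SEPARATOR = r"(?:</?[a-zA-Z][^>]*>|[^\w\s<]|\s)+"             # HTML tags / non-alphanumerics

def extract_pattern(page, term, hypernym):
    """Substeps (a)-(d) for one seed occurrence; returns (generalized_pattern, gloss)."""
    m = re.search(re.escape(term)
                  + "(" + SEPARATOR + ")"                  # pM: term/gloss separator
                  + "(.*?" + re.escape(hypernym) + ")"     # gloss up to the hypernym ...
                  + "(.*?)"                                # ... and the rest of the gloss
                  + "(" + BLOCK_TAG + ")",                 # pR: closing block element
                  page, flags=re.IGNORECASE | re.DOTALL)
    if m is None:
        return None
    p_m, gloss, p_r = m.group(1), m.group(2) + m.group(3), m.group(4)
    # pL: tags and non-alphanumerics immediately to the left of the term.
    p_l = re.search(r"(?:</?[a-zA-Z][^>]*>|[^\w\s])*\Z", page[:m.start()]).group(0)
    pattern = re.escape(p_l) + "(.+?)" + re.escape(p_m) + "(.+?)" + re.escape(p_r)
    return pattern, gloss

def match_pattern(page, pattern):
    """Substep (e): harvest further (term, gloss) pairs with the generalized pattern."""
    return [(t.strip(), g.strip())
            for t, g in re.findall(pattern, page, flags=re.IGNORECASE | re.DOTALL)]

In this sketch, calling extract_pattern(page, "firewall", "security system") on a page containing the example snippet would yield a pattern playing the same role as <p><b> * </b> - * </p>, which match_pattern then re-applies to the same page to collect further (term, gloss) pairs.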
Step 4. Gloss ranking and filtering: importantly, not all the extracted definitions pertain to the domain of interest. In order to rank the glosses obtained at iteration k by domain pertinence, we assume that the terms acquired at previous iterations belong to the target domain, i.e., they are domain terms at iteration k. Formally, we define the terminology T_1^{k-1} of the domain terms accumulated up until iteration k-1 as follows: T_1^{k-1} := T^1 ∪ . . . ∪ T^{k-1}, where T^i := {t : ∃(t, g) ∈ Gi}. For the base step k = 1, we define T_1^0 := {t : ∃(t, g) ∈ G1}, i.e., we use the first-iteration terminology itself. To rank the glosses, we first transform each acquired gloss g into its bag-of-words representation Bag(g), which contains all the single- and multi-word expressions in g. We use the lexicon of the target language's Wikipedia together with T_1^{k-1} in order to obtain the bag of content words.4 Then we calculate the domain score of a gloss g as follows:

score(g) = |Bag(g) ∩ T_1^{k-1}| / |Bag(g)|.   (1)

Finally, we use a threshold θ (whose tuning is described in the experimental section) to remove from Gk those glosses g whose score(g) < θ. In Table 2 we show some glosses in the computer science domain (second column, domain terms are underlined) together with their scores (last column).

4In fact Wikipedia is only utilized in the multi-word identification phase. We do not use Wikipedia for discovering new terms.

Table 2: Examples of extracted terms, glosses and hypernyms (seeds are in bold, domain terms, i.e., in T_1^{k-1}, are underlined, non-domain terms in italics).
Term: dynamic packet filter | Gloss: A firewall facility that can monitor the state of active connections and use this information to determine which network packets to allow through the firewall | Hypernym: firewall | # Seeds: 2 | Score: 0.75
Term: die | Gloss: An integrated circuit chip cut from a finished wafer. | Hypernym: integrated circuit | # Seeds: 1 | Score: 0.75
Term: constructor | Gloss: a method used to help create a new object and initialise its data | Hypernym: method | # Seeds: 0 | Score: 1.00

Step 5. Seed selection for next iteration: we now aim at selecting the new set of hypernymy relation seeds to be used to start the next iteration. We perform three substeps:

(a) Hypernym extraction: for each newly-acquired term/gloss pair (t, g) ∈ Gk, we automatically extract a candidate hypernym h from the textual gloss g. To do this we use a simple unsupervised heuristic which just selects the first term in the gloss.5 We show an example of hypernym extraction for some terms in Table 2 (we report the term in column 1, the gloss in column 2 and the hypernyms extracted by the first-term hypernym extraction heuristic in column 3).

5While more complex strategies could be used, such as supervised classifiers (Navigli and Velardi, 2010), we found that this heuristic works well because, even when it is not a hypernym, the first term plays the role of a cue word for the defined term.

(b) (Term, Hypernym)-ranking: we sort all the glosses in Gk by the number of seed terms found in each gloss. In the case of ties (i.e., glosses with the same number of seed terms), we further sort the glosses by the score given in Formula 1. We show an example of ranking for some glosses in Table 2, where seed terms are in bold, domain terms (i.e., in T_1^{k-1}) are underlined, and non-domain terms are shown in italics.

(c) New seed selection: we select the (term, hypernym) pairs corresponding to the K top-ranking glosses. Finally, if k equals the maximum number of iterations, we stop.
Else, we increment the iteration counter (i.e., k := k + 1) and jump to step (2) of our glossary bootstrapping algorithm after replacing S with the new set of seeds. The output of glossary bootstrapping is a domain glossary G := S i=1,...,max Gi, which includes a domain terminology T := {t : ∃(t, g) ∈G} (i.e., T := T max 1 ) and a set of glosses glosses(t) for each term t ∈T (i.e., glosses(t) := {g : ∃(t, g) ∈G}). 3 Experimental Setup 3.1 Domains and Gold Standards For our experiments we focused on four different domains, namely, Computing, Botany, Environment, and Finance, and on three languages, namely, English, French and Italian. Note that not all the four domains are clear-cut. For instance, the Environment domain is quite interdisciplinary, including terms from fields such as Chemistry, Biology, Law, Politics, etc. For each domain and language we selected as gold standards well-reputed glossaries on the Web, such as: the Utah computing glossary,6 the Wikipedia glossary of botanical terms,7 a set of Wikipedia glossaries about environment,8 and the Reuters glossary for Finance9 (full list at http://lcl.uniroma1.it/ glossboot/). We report the size of the four gold-standard datasets in Table 4. 6http://www.math.utah.edu/∼wisnia/glossary.html 7http://en.wikipedia.org/wiki/Glossary of botanical terms 8http://en.wikipedia.org/wiki/List of environmental issues, http://en.wikipedia.org/wiki/Glossary of environmental science, http://en.wikipedia.org/wiki/Glossary of climate change 9http://glossary.reuters.com/index.php/Main Page 531 Computing Botany Environment Finance chip circuit leaf organ sewage waste eurobond bond destructor method grass plant acid rain rain asset play stock compiler program cultivar variety ecosystem system income stock security scanner device gymnosperm plant air monitoring sampling financial intermediary institution firewall security system flower reproductive organ global warming temperature derivative financial product Table 3: Hypernymy relation seeds used to bootstrap glossary learning in the four domains for the English language. 3.2 Seed Selection For each domain and language we manually selected five seed hypernymy relations, shown for the English language in Table 3. The seeds were selected by the authors on the basis of just two conditions: i) the seeds should cover different aspects of the domain and should, indeed, identify the domain implicitly, ii) at least 10,000 results should be returned by the search engine when querying it with the seeds plus the glossaryKeyword (see step (2) of GlossBoot). The seed selection was not fine-tuned (i.e., it was not adjusted to improve performance), so it might well be that better seeds would provide better results (see, e.g., (Kozareva and Hovy, 2010b)). However, this type of consideration is beyond the scope of this paper. 3.2.1 Evaluation measures We performed experiments to evaluate the quality of both terms and glosses, as jointly extracted by GlossBoot. Terms. For each domain and language we calculated coverage, extra-coverage and precision of the acquired terms T. Coverage is the ratio of extracted terms in T also contained in the gold standard ˆT to the size of ˆT. Extra-coverage is calculated as the ratio of the additional extracted terms in T \ ˆT over the number of gold standard terms ˆT. Finally, precision is the ratio of extracted terms in T deemed to be within the domain. 
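Before detailing the seed selection, it may help to see how the five steps of Section 2 fit together. The following minimal sketch is our own illustration of one bootstrapping run; web_search, extract_terms_and_glosses, num_seed_terms and first_term are hypothetical placeholders standing in for the machinery of Steps 2-5, and the sketch is not the authors' implementation.

def domain_score(gloss, domain_terms):
    # Formula (1), with a crude whitespace split standing in for Bag(g).
    words = set(gloss.lower().split())
    return len(words & domain_terms) / max(len(words), 1)

def glossboot(seeds, glossary_keyword, max_iterations=5, theta=0.1, K=5):
    glossary = {}              # G: maps each term to its set of glosses
    accumulated_terms = set()  # terms acquired at previous iterations
    for k in range(1, max_iterations + 1):
        # Step 2: one query per seed pair, e.g. "firewall" "security system" glossary.
        pages = [p for term, hyper in seeds
                 for p in web_search(f'"{term}" "{hyper}" {glossary_keyword}')]
        # Step 3: pattern induction and (term, gloss) harvesting from the pages.
        pairs = extract_terms_and_glosses(pages, seeds)
        # Step 4: rank by domain pertinence and filter with threshold theta.
        reference = accumulated_terms or {t for t, _ in pairs}   # base step at k = 1
        kept = [(t, g) for t, g in pairs if domain_score(g, reference) >= theta]
        for t, g in kept:
            glossary.setdefault(t, set()).add(g)
        # Step 5: new seeds come from the K top-ranking glosses (first term as hypernym).
        ranked = sorted(kept,
                        key=lambda tg: (num_seed_terms(tg[1], seeds),
                                        domain_score(tg[1], reference)),
                        reverse=True)
        seeds = [(t, first_term(g)) for t, g in ranked[:K]]
        accumulated_terms |= {t for t, _ in kept}
    return glossary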
To calculate precision we randomly sampled 5% of the retrieved terms and asked two human annotators to manually tag their domain pertinence (with adjudication in case of disagreement; κ = .62, indicating substantial agreement). Note that by sampling on the entire set T, we calculate the precision of both terms in T ∩ˆT, i.e., in the gold standard, and terms in T \ ˆT, i.e., not in the gold standard, which are not necessarily outside the domain. Glosses. We calculated the precision of the extracted glosses as the ratio of glosses which were both well-formed textual definitions and specific Botany Comput. Environ. Finance EN Gold std. terms 772 421 713 1777 GlossBoot terms 5598 3738 4120 5294 glosses 11663 4245 5127 6703 FR Gold std. terms 662 278 117 109 GlossBoot terms 3450 3462 1941 1486 glosses 5649 3812 2095 1692 IT Gold std. terms 205 244 450 441 GlossBoot terms 1965 3356 1630 3601 glosses 2678 5891 1759 5276 Table 4: Size of the gold-standard and automatically-acquired glossaries for the four domains in the three languages of interest. to the target domain. Precision was determined on a random sample of 5% of the acquired glosses for each domain and language. The annotation was made by two annotators, with κ = .675, indicating substantial agreement. 3.3 Parameter tuning We tuned the minimum and maximum length of both pL and pR (see step (3) of GlossBoot) and the threshold θ that we use to filter out non-domain glosses (see step (4) of GlossBoot) using an extra domain, i.e., the Arts domain. To do this, we created a development dataset made up of the full set of 394 terms from the Tate Gallery glossary,10 and bootstrapped our glossary extraction method with just one seed, i.e., (fresco, painting). We chose an optimal value of θ = 0.1 on the basis of a harmonic mean of coverage and precision. Note that, since precision also concerns terms not in the gold standard, we had to manually validate a sample of the extracted terms for each of the 21 tested values of θ ∈{0, 0.05, 0.1, . . . , 1.0}. 4 Results and Discussion 4.1 Terms The size of the extracted terminologies for the four domains after five iterations are reported in Table 4. In Table 5 we show examples of the possible scenarios for terms: in-domain extracted terms 10http://www.tate.org.uk/collections/glossary/ 532 In-domain In-domain Out-of-domain In-domain (in gold std, ∈ˆT ∩T) (not in gold std, ∈T \ ˆT) (not in gold std, ∈T \ ˆT) (missed, ∈ˆT \ T) Computing software, inheritance, microprocessor clipboard, even parity, sudoer gs1-128 label, grayscale, quantum dots openwindows, sun microsystems, hardwired Botany pollinium, stigma, spore vegetation, dichogamous, fertilisation ion, free radicals, manamana nomenclature, endemism, insectivorous Environment carcinogen, footprint, solar power frigid soil, biosafety, fire simulator epidermis, science park, alum g8, best practice, polystyrene Finance cash, bond, portfolio trustor, naked option, market price precedent, immigration, heavy industry co-location, petrodollars, euronext Table 5: Examples of extracted (and missed) terms. Botany Comput. Environ. Finance EN Precision 95% 98% 94% 98% Coverage 85% 40% 35% 32% Extra-coverage 640% 848% 542% 266% FR Precision 80% 97% 83% 98% Coverage 97% 27% 14% 26% Extra-coverage 425% 1219% 1646% 1350% IT Precision 89% 98% 76% 99% Coverage 42% 27% 11% 73% Extra-coverage 511% 1349% 356% 746% Table 6: Precision, coverage and extra-coverage of the term extraction phase after 5 iterations. 
which are also found in the gold standard (column 2), in-domain extracted terms but not in the gold standard (column 3), out-of-domain extracted terms (column 4), and domain terms in the gold standard but not extracted by our approach (column 5). A quantitative evaluation is provided in Table 6, which shows the percentage results in terms of precision, coverage, and extra-coverage after 5 iterations of GlossBoot. For the English language we observe good coverage (between 32% and 40% on three domains, with a high peak of 85% coverage on Botany) and generally very high precision values. Moreover for the French and the Italian languages we observe a peak in the Botany and Finance domains respectively, while the lowest performances in terms of precision and coverage are observed for Environment, i.e., the most interdisciplinary domain. In all three languages GlossBoot provides very high extra coverage of domain terms, i.e., additional terms which are not in the gold standard but are returned by our system. The figures, shown in Table 6, range between 266% (4726/1777) for the English Finance domain and 1646% (1926/117) for the French Environment domain. These results, together with the generally high precision values, indicate the larger extent of our bootstrapped glossaries compared to our gold standards. Botany Computing Environm. Finance Min Max Min Max Min Max Min Max 26% 68% 8% 39% 5% 33% 14% 30% Table 7: Coverage ranges for single-seed term extraction for the English language. Number of seeds. Although the choice of selecting five hypernymy relation seeds is quite arbitrary, it shows that we can acquire a reliable terminology with minimal human intervention. Now, an obvious question arises: what if we bootstrapped GlossBoot with fewer hypernym seeds, e.g., just one seed? To answer this question we replicated our English experiments on each single (term, hypernym) pair in our seed set. In Table 7 we show the coverage ranges – i.e., the minimum and maximum coverage values – for the five seeds on each domain. We observe that the maximum coverage can attain values very close to those obtained with five seeds. However, the minimum coverage values are much lower. So, if we adopt a 1-seed bootstrapping policy there is a high risk of acquiring a poorer terminology unless we select the single seed very carefully, whereas we have shown that just a few seeds can cope with domain variability. Similar considerations can be made regarding different seed set sizes (we also tried 2, 3 and 4). So five is not a magic number, just one which can guarantee an adequate coverage of the domain. Number of iterations. In order to study the coverage trend over iterations we selected 5 seeds for our tuning domain (i.e., Arts, see Section 3.3). Figure 3 shows the size (left graph), coverage, extra-coverage and precision (middle graph) of the acquired glossary after each iteration, from 1 to 20. As expected, (extra-)coverage grows over iterations, while precision drops. Stopping at iteration 5, as we do, is optimal in terms of the harmonic mean of precision and coverage (right graph in Figure 3). 
533 1000 2000 3000 4000 5000 6000 7000 2 4 6 8 10 12 14 16 18 20 iteration Number of terms and glosses extracted over iterations terms glosses 10% 100% 1000% 2 4 6 8 10 12 14 16 18 20 iteration Coverage, extra-coverage and precision over iterations precision coverage extra-coverage 30% 32% 34% 36% 38% 40% 2 4 6 8 10 12 14 16 18 20 iteration Harmonic mean of precision and coverage over iterations harmonic mean of precision and coverage Figure 3: Size, coverage and precision trends for Arts (tuning domain) over 20 iterations for English. Botany Comput. Environm. Finance EN 96% 94% 97% 97% FR 88% 89% 88% 95% IT 94% 98% 83% 99% Table 8: Precision of the glosses for the four domains and for the three languages. 4.2 Glosses We show the results of gloss evaluation in Table 8. Precision ranges between 83% and 99%, with three domains performing above 92% on average across languages, and the Environment domain performing relatively worse because of its highly interdisciplinary nature (89% on average). We observe that these results are strongly correlated with the precision of the extracted terms (cf. Table 6), because the retrieved glosses of domain terms are usually in-domain too, and follow a definitional style because they come from glossaries. Note, however, that the gloss precision can also be higher than term precision, because many pertinent glosses might be extracted for the same term, cf. Table 4. 5 Comparative Evaluation 5.1 Comparison with Google Define We performed a comparison with Google Define,11 a state-of-the-art definition search service. This service inputs a term query and outputs a list of glosses. First, we randomly sampled 100 terms from our gold standard for each domain and each of the three languages. Next, for each domain and language, we manually calculated the fraction of terms for which an in-domain definition was provided by Google Define and GlossBoot. Table 9 shows the coverage results. Google Define outperforms our system on all four domains (with a few exceptions). However 11Accessible from Google search by means of the define: keyword. Botany Comput. Environm. Finance EN Google Define 90% 87% 84% 82% GlossBoot 77% 47% 44% 51% FR Google Define 40% 48% 36% 82% GlossBoot 88% 42% 22% 32% IT Google Define 52% 74% 78% 80% GlossBoot 64% 38% 44% 92% Table 9: Number of domain glosses (from a random sample of 100 gold standard terms per domain) retrieved using Google Define and GlossBoot. we note that Google Define: i) requires knowing the domain term to be defined in advance, whereas we jointly acquire thousands of terms and glosses starting from just a few seeds; ii) does not discriminate between glosses pertaining to the target domain and glosses concerning other fields or senses, whereas we extract domain-specific glosses. 5.2 Comparison with TaxoLearn We also compared GlossBoot with a recent approach to glossary learning embedded into a framework for graph-based taxonomy learning from scratch, called TaxoLearn (Navigli et al., 2011). Since this approach requires the manual selection of a domain corpus to automatically extract terms and glosses, we decided to keep a level playing field and experimented with the same domain used by the authors, i.e., Artificial Intelligence (AI). 
TaxoLearn was applied to the entire set of IJCAI 2009 proceedings, resulting in the extraction of 427 terms and 834 glosses.12 As regards GlossBoot, we selected 10 seeds to cover all the fields of AI, obtaining 5827 terms and 6716 glosses after 5 iterations, one order of magnitude greater than TaxoLearn. As for the precision of the extracted terms, we randomly sampled 50% of them for each system. We show in Table 10 (first row) the estimated term 12Available at: http://lcl.uniroma1.it/taxolearn 534 GlossBoot TaxoLearn Term Precision 82.3% (2398/2913) 77.0% (164/213) Gloss Precision 82.8% (2780/3358) 78.9% (329/417) Table 10: Estimated term and gloss precision of GlossBoot and TaxoLearn for the Artificial Intelligence domain. precision for GlossBoot and TaxoLearn. The precision value for GlossBoot is lower than the precision values of the four domains in Table 6, due to the AI domain being highly interdisciplinary. TaxoLearn obtained a lower precision because it acquires a full-fledged taxonomy for the domain, thus also including higher-level concepts which do not necessarily pertain to the domain. We performed a similar evaluation for the precision of the acquired glosses, by randomly sampling 50% of them for each system. We show in Table 10 (second row) the estimated gloss precision of GlossBoot and TaxoLearn. Again, GlossBoot outperforms TaxoLearn, retrieving a larger amount of glosses (6716 vs. 834) with higher precision. We remark, however, that in TaxoLearn glossary extraction is a by-product of the taxonomy learning process. Finally, we note that we cannot compare with approaches based on lexical patterns (such as (Kozareva and Hovy, 2010a)), because they are not aimed at learning glossaries, but just at retrieving sentence snippets which contain pairs of terms/hypernyms (e.g., “supervised systems such as decision trees”). 6 Related Work There are several techniques in the literature for the automated acquisition of definitional knowledge. Fujii and Ishikawa (2000) use an n-gram model to determine the definitional nature of text fragments, whereas Klavans and Muresan (2001) apply pattern matching techniques at the lexical level guided by cue phrases such as “is called” and “is defined as”. Cafarella et al. (2005) developed a Web search engine which handles more general and complex patterns like “cities such as ProperNoun(Head(NP))” in which it is possible to constrain the results with syntactic properties. More recently, a domain-independent supervised approach was presented which learns WordClass Lattices (WCLs), i.e. lattice-based definition classifiers that are applied to candidate sentences containing the input terms (Navigli and Velardi, 2010). WCLs have been shown to perform with high precision in several domains (Velardi et al., 2013). To avoid the burden of manually creating a training dataset, definitional patterns can be extracted automatically. Reiplinger et al. (2012) experimented with two different approaches for the acquisition of lexical-syntactic patterns. The first approach involves bootstrapping patterns from a domain corpus, and then manually refining the acquired patterns. The second approach, instead, involves automatically acquiring definitional sentences by using a more sophisticated syntactic and semantic processing. The results shows high precision in both cases. However, these approaches to glossary learning extract unrestricted textual definitions from open text. In order to filter out non-domain definitions, Velardi et al. 
(2008) automatically extract a domain terminology from an input corpus which they later use for assigning a domain score to each harvested definition and filtering out non-domain candidates. The extraction of domain terms from corpora can be performed either by means of statistical measures such as specificity and cohesion (Park et al., 2002), or just TF*IDF (Kim et al., 2009).

To avoid the use of a large domain corpus, terminologies can be obtained from the Web by using Doubly-Anchored Patterns (DAPs) which, given a (term, hypernym) pair, harvest sentences matching manually-defined patterns like “<hypernym> such as <term>, and *” (Kozareva et al., 2008). Kozareva and Hovy (2010a) further extend this term extraction process by harvesting new hypernyms using the corresponding inverse patterns (called DAP^-1) like “* such as <term1>, and <term2>”. Similarly to our approach, they drop the requirement of a domain corpus and start from a small number of (term, hypernym) seeds. However, while Doubly-Anchored Patterns have proven useful in the induction of domain taxonomies (Kozareva and Hovy, 2010a), they cannot be applied to the glossary learning task, because the extracted sentences are not formal definitions. In contrast, GlossBoot performs the novel task of multilingual glossary learning from the Web by bootstrapping the extraction process with a few (term, hypernym) seeds.

Bootstrapping techniques (Brin, 1998; Agichtein and Gravano, 2000; Paşca et al., 2006) have been successfully applied to several tasks, including high-precision semantic lexicon extraction from large corpora (Riloff and Jones, 1999; Thelen and Riloff, 2002; McIntosh and Curran, 2008; McIntosh and Curran, 2009), learning semantic relations (Pantel and Pennacchiotti, 2006), extracting surface text patterns for open-domain question answering (Ravichandran and Hovy, 2002), semantic tagging (Huang and Riloff, 2010) and unsupervised Word Sense Disambiguation (Yarowsky, 1995).

     | Domain      | Term              | Gloss
  EN | Botany      | deciduous         | losing foliage at the end of the growing season.
     | Computing   | information space | The abstract concept of everything accessible using networks: the Web.
     | Finance     | discount          | The difference between the lower price paid for a security and the security’s face amount at issue.
  FR | Botany      | insectivore       | Qui capture des insectes et en absorbe les matières nutritives.
     | Computing   | notebook          | C’est l’appellation d’un petit portable d’une taille proche d’une feuille A4.
     | Environment | écosystème        | Ensemble des êtres vivants et des éléments non vivants d’un milieu qui sont liés vitalement entre eux.
  IT | Computing   | link              | Collegamento tra diverse pagine web, può essere costituito da immagini o testo.
     | Environment | effetto serra     | Riscaldamento dell’atmosfera terrestre dovuto alla presenza di gas nell’atmosfera (anidride carbonica, metano e vapore acqueo) che ostacolano l’uscita delle radiazioni infrarosse emesse dal suolo terreste verso l’alto.
     | Finance     | spread            | Indica la differenza tra la quotazione di acquisto e quella di vendita.

Table 11: An excerpt of the domain glossaries acquired for the three languages.

By exploiting the (term, hypernym) seeds to bootstrap the iterative acquisition of extraction patterns from Web glossary pages, we can cover the high variability of textual definitions, including both sentences matching the above-mentioned lexico-syntactic patterns (e.g., “a corpus is a collection of documents”) and glossary-style definitions (e.g., “corpus: a collection of document”) independently of the target domain and language.
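To make the closing contrast concrete, the toy patterns below match the two surface styles of definition just mentioned: a lexico-syntactic "X is a ..." sentence and a "term: gloss" glossary-style entry. This is purely illustrative; GlossBoot does not rely on hand-written regular expressions but bootstraps HTML extraction patterns from the glossary pages it retrieves, so the function name and patterns here are our own.

```python
import re

# Illustrative patterns only; GlossBoot's actual patterns are learned from
# the HTML of retrieved glossary pages, not written by hand.
SENTENCE_DEF = re.compile(r"^(?P<term>[\w -]+?) is an? (?P<gloss>.+)$", re.IGNORECASE)
GLOSSARY_DEF = re.compile(r"^(?P<term>[\w -]+?)\s*:\s*(?P<gloss>.+)$")

def match_definition(line: str):
    """Return (term, gloss) if the line looks like a definition, else None."""
    for pattern in (SENTENCE_DEF, GLOSSARY_DEF):
        m = pattern.match(line.strip())
        if m:
            return m.group("term").strip(), m.group("gloss").strip()
    return None

print(match_definition("a corpus is a collection of documents"))
# ('a corpus', 'collection of documents')
print(match_definition("corpus: a collection of documents"))
# ('corpus', 'a collection of documents')
```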
7 Conclusions In this paper we have presented GlossBoot, a new, minimally-supervised approach to multilingual glossary learning. Starting from a few hypernymy relation seeds which implicitly identify the domain of interest, we apply a bootstrapping approach which iteratively obtains HTML patterns from Web glossaries and then applies them to the extraction of term/gloss pairs. To our knowledge, GlossBoot is the first approach to large-scale glossary learning which jointly acquires thousands of terms and glosses for a target domain and language with minimal supervision. The gist of GlossBoot is our glossary bootstrapping approach, thanks to which we can drop the requirements of existing techniques such as the availability of domain text corpora, which often do not contain enough definitions, and the manual specification of lexical patterns, which typically extract sentence snippets, instead of formal glosses. GlossBoot will be made available to the research community as open-source software. Beyond the immediate usability of its output and its effective use for domain Word Sense Disambiguation (Faralli and Navigli, 2012), we wish to show the benefit of GlossBoot in gloss-driven approaches to ontology learning (Navigli et al., 2011; Velardi et al., 2013) and semantic network enrichment (Navigli and Ponzetto, 2012). In Table 11 we show an excerpt of the acquired glossaries. All the glossaries and gold standards created for our experiments are available from the authors’ Web site http://lcl.uniroma1.it/ glossboot/. We remark that the terminologies covered with GlossBoot are not only precise, but also one order of magnitude greater than those covered in individual online glossaries. As future work we plan to study the ability of GlossBoot to acquire domain glossaries at different levels of specificity (i.e., domains vs. subdomains). We also plan to exploit the acquired HTML patterns for implementing an open-source glossary crawler, along the lines of Google Define. Acknowledgments The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. 536 References Eugene Agichtein and Luis Gravano. 2000. Snowball: extracting relations from large plain-text collections. In Proceedings of the 5th ACM conference on Digital Libraries, pages 85–94, San Antonio, Texas, USA. Sergey Brin. 1998. Extracting patterns and relations from the World Wide Web. In Proceedings of the International Workshop on The World Wide Web and Databases, pages 172–183, London, UK. Michael J. Cafarella, Doug Downey, Stephen Soderland, and Oren Etzioni. 2005. KnowItNow: Fast, scalable information extraction from the web. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 563–570, Vancouver, British Columbia, Canada. Hang Cui, Min-Yen Kan, and Tat-Seng Chua. 2007. Soft pattern matching models for definitional question answering. ACM Transactions on Information Systems, 25(2):1–30. Weisi Duan and Alexander Yates. 2010. Extracting glosses to disambiguate word senses. In Proceedings of Human Language Technologies: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 627–635, Los Angeles, CA, USA. Stefano Faralli and Roberto Navigli. 2012. A New Minimally-supervised Framework for Domain Word Sense Disambiguation. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1411– 1422, Jeju, Korea. Atsushi Fujii and Tetsuya Ishikawa. 2000. Utilizing the World Wide Web as an encyclopedia: extracting term descriptions from semi-structured texts. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 488–495, Hong Kong. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 15th International Conference on Computational Linguistics, pages 539–545, Nantes, France. Eduard H. Hovy, Roberto Navigli, and Simone Paolo Ponzetto. 2013. Collaboratively built semistructured content and Artificial Intelligence: The story so far. Artificial Intelligence, 194:2–27. Ruihong Huang and Ellen Riloff. 2010. Inducing domain-specific semantic class taggers from (almost) nothing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 275–285, Uppsala, Sweden. Su Nam Kim, Timothy Baldwin, and Min-Yen Kan. 2009. An unsupervised approach to domain-specific term extraction. In Proceedings of the Australasian Language Technology Workshop, pages 94–98, Sydney, Australia. Judith Klavans and Smaranda Muresan. 2001. Evaluation of the DEFINDER system for fully automatic glossary construction. In Proceedings of the American Medical Informatics Association (AMIA) Symposium, pages 324–328, Washington, D.C., USA. Zornitsa Kozareva and Eduard Hovy. 2010a. A semi-supervised method to learn and construct taxonomies using the Web. In Proceedings of Empirical Methods in Natural Language Processing, pages 1110–1118, Cambridge, MA, USA. Zornitsa Kozareva and Eduard H. Hovy. 2010b. Not all seeds are equal: Measuring the quality of text mining seeds. In Proceedings of Human Language Technologies: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 618–626, Los Angeles, California, USA. Zornitsa Kozareva, Ellen Riloff, and Eduard Hovy. 2008. Semantic class learning from the Web with hyponym pattern linkage graphs. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 1048–1056, Columbus, Ohio, USA. Tara McIntosh and James R. Curran. 2008. Weighted mutual exclusion bootstrapping for domain independent lexicon and template acquisition. In Proceedings of the Australasian Language Technology Association Workshop, pages 97–105, CSIRO ICT Centre, Tasmania. Tara McIntosh and James R. Curran. 2009. Reducing semantic drift with bagging and distributional similarity. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 396–404, Suntec, Singapore. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217– 250. Roberto Navigli and Paola Velardi. 2010. Learning Word-Class Lattices for definition and hypernym extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1318–1327, Uppsala, Sweden. Roberto Navigli, Paola Velardi, and Stefano Faralli. 2011. A graph-based algorithm for inducing lexical taxonomies from scratch. In Proceedings of the 22th International Joint Conference on Artificial Intelligence, pages 1872–1877, Barcelona, Spain. 
537 Marius Pas¸ca, Dekang Lin, Jeffrey Bigham, Andrei Lifchits, and Alpa Jain. 2006. Names and similarities on the web: Fact extraction in the fast lane. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 809–816, Sydney, Australia. Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging Generic Patterns for Automatically Harvesting Semantic Relations. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL), Sydney, Australia, pages 113–120, Sydney, Australia. Youngja Park, Roy J. Byrd, and Branimir K. Boguraev. 2002. Automatic glossary extraction: beyond terminology identification. In Proceedings of the 19th International Conference on Computational Linguistics, pages 1–7, Taipei, Taiwan. Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 41–47, Philadelphia, Pennsylvania. Melanie Reiplinger, Ulrich Sch¨afer, and Magdalena Wolska. 2012. Extracting glossary sentences from scholarly articles: A comparative evaluation of pattern bootstrapping and deep analysis. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 55–65, Jeju Island, Korea. Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the sixteenth national conference on Artificial intelligence and the eleventh Innovative applications of artificial intelligence conference, pages 474–479, Menlo Park, CA, USA. Horacio Saggion. 2004. Identifying definitions in text collections for question answering. In Proceedings of the Fourth International Conference on Language Resources and Evaluation, pages 1927–1930, Lisbon, Portugal. Michael Thelen and Ellen Riloff. 2002. A bootstrapping method for learning semantic lexicons using extraction pattern contexts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 214–221, Salt Lake City, UT, USA. Paola Velardi, Roberto Navigli, and Pierluigi D’Amadio. 2008. Mining the Web to create specialized glossaries. IEEE Intelligent Systems, 23(5):18–25. Paola Velardi, Stefano Faralli, and Roberto Navigli. 2013. OntoLearn Reloaded: A graph-based algorithm for taxonomy induction. Computational Linguistics, 39(3). David Yarowsky. 1995. Unsupervised Word Sense Disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 189– 196, Cambridge, MA, USA. 538
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 539–549, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Collective Annotation of Linguistic Resources: Basic Principles and a Formal Model Ulle Endriss and Raquel Fern´andez Institute for Logic, Language & Computation University of Amsterdam {ulle.endriss|raquel.fernandez}@uva.nl Abstract Crowdsourcing, which offers new ways of cheaply and quickly gathering large amounts of information contributed by volunteers online, has revolutionised the collection of labelled data. Yet, to create annotated linguistic resources from this data, we face the challenge of having to combine the judgements of a potentially large group of annotators. In this paper we investigate how to aggregate individual annotations into a single collective annotation, taking inspiration from the field of social choice theory. We formulate a general formal model for collective annotation and propose several aggregation methods that go beyond the commonly used majority rule. We test some of our methods on data from a crowdsourcing experiment on textual entailment annotation. 1 Introduction In recent years, the possibility to undertake largescale annotation projects with hundreds or thousands of annotators has become a reality thanks to online crowdsourcing methods such as Amazon’s Mechanical Turk and Games with a Purpose. Although these techniques open the door to a true revolution for the creation of annotated corpora, within the computational linguistics community there so far is no clear understanding of how the so-called “wisdom of the crowds” could or should be used to develop useful annotated linguistic resources. Those who have looked into this increasingly important issue have mostly concentrated on validating the quality of multiple non-expert annotations in terms of how they compare to expert gold standards; but they have only used simple aggregation methods based on majority voting to combine the judgments of individual annotators (Snow et al., 2008; Venhuizen et al., 2013). In this paper, we take a different perspective and instead focus on investigating different aggregation methods for deriving a single collective annotation from a diverse set of judgments. For this we draw inspiration from the field of social choice theory, a theoretical framework for combining the preferences or choices of several individuals into a collective decision (Arrow et al., 2002). Our aim is to explore the parallels between the task of aggregating the preferences of the citizens participating in an election and the task of combining the expertise of speakers taking part in an annotation project. Our contribution consists in the formulation of a general formal model for collective annotation and, in particular, the introduction of several families of aggregation methods that go beyond the commonly used majority rule. The remainder of this paper is organised as follows. In Section 2 we introduce some basic terminology and argue that there are four natural forms of collective annotation. We then focus on one of them and present a formal model for it in Section 3. We also formulate some basic principles of aggregation within this model in the same section. Section 4 introduces three families of aggregation methods: bias-correcting majority rules, greedy methods for identifying (near-)consensual coalitions of annotators, and distance-based aggregators. 
We test the former two families of aggregators, as well as the simple majority rule commonly used in similar studies, in a case study on data extracted from a crowdsourcing experiment on textual entailment in Section 5. Section 6 discusses related work and Section 7 concludes.

2 Four Types of Collective Annotation

An annotation task consists of a set of items, each of which is associated with a set of possible categories (Artstein and Poesio, 2008). The categories may be the same for all items or they may be item-specific. For instance, dialogue act annotation (Allen and Core, 1997; Carletta et al., 1997) and word similarity rating (Miller and Charles, 1991; Finkelstein et al., 2002) involve choosing from amongst a set of categories—acts in a dialogue act taxonomy or values on a scale, respectively—which remains fixed for all items in the annotation task. In contrast, in tasks such as word sense labelling (Kilgarriff and Palmer, 2000; Palmer et al., 2007; Venhuizen et al., 2013) and PP-attachment annotation (Rosenthal et al., 2010; Jha et al., 2010) coders need to choose a category amongst a set of options specific to each item—the possible senses of each word or the possible attachment points in each sentence with a prepositional phrase.

In either case (one set of categories for all items vs. item-specific sets of categories), annotators are typically asked to identify, for each item, the category they consider the best match. In addition, they may be given the opportunity to indicate that they cannot judge (the “don’t know” or “unclear” category). For large-scale annotation projects run over the Internet it is furthermore very likely that an annotator will not be confronted with every single item, and it makes sense to distinguish items not seen by the annotator from items labelled as “don’t know”. We refer to this form of annotation, i.e., an annotation task where coders have the option to (i) label items with one of the available categories, to (ii) choose “don’t know”, or to (iii) not label an item at all, as plain annotation. Plain annotation is the most common form of annotation and it is the one we shall focus on in this paper. However, other, more complex, forms of annotation are also possible and of interest. For instance, we may ask coders to rank the available categories (resulting in, say, a weak or partial order over the categories); we may ask them to provide qualitative ratings of the available categories for each item (e.g., excellent match, good match, etc.); or we may ask for quantitative ratings (e.g., numbers from 1 to 100).[1] We refer to these forms of annotation as complex annotation.

[1] Some authors have combined qualitative and quantitative ratings; e.g., for the Graded Word Sense dataset of Erk et al. (2009) coders were asked to classify each relevant WordNet sense for a given item on a 5-point scale: 1 completely different, 2 mostly different, 3 similar, 4 very similar, 5 identical.

We want to investigate how to aggregate the information available for each item once annotations by multiple annotators have been collected. In line with the terminology used in social choice theory and particularly judgment aggregation (Arrow, 1963; List and Pettit, 2002), let us call an aggregation method independent if the outcome regarding a given item j only depends on the categories provided by the annotators regarding j itself (but not on, say, the categories assigned to a different item j′). Independent aggregation methods are attractive due to their simplicity.
They also have some conceptual appeal: when deciding on j maybe we should only concern ourselves with what people have to say regarding j? On the other hand, insisting on independence prevents us from exploiting potentially useful information that cuts across items. For instance, if a particular annotator almost always chooses category c, then we should maybe give less weight to her selecting c for the item j at hand than when some other annotator chooses c for j. This would call for methods that do not respect independence, which we shall refer to as general aggregation. Note that when studying independent aggregation methods, without loss of generality, we may assume that each annotation task consists of just a single item.

In view of our discussion above, there are four classes of approaches to collective annotation:

(1) Independent aggregation of plain annotations. This is the simplest case, resulting in a fairly limited design space. When, for a given item, each annotator has to choose between k categories (or abstain) and we do not permit ourselves to use any other information, then the only reasonable choice is to implement the plurality rule (Taylor, 2005), under which the winning category is the category chosen by the largest number of annotators. In case there are exactly two categories available, the plurality rule is also called the majority rule. The only additional consideration to make here (besides how to deal with ties) is whether or not we may want to declare no winner at all in case the plurality winner does not win by a sufficiently significant margin or does not make a particular quota. This is the most common approach in the literature (see, e.g., Venhuizen et al., 2013).

(2) Independent aggregation of complex annotations. This is a natural generalisation of the first approach, resulting in a wider range of possible methods. We shall not explore it here, but only point out that in case annotators provide linear orders over categories, there is a close resemblance to classical voting theory (Taylor, 2005); in case only partial orders can be elicited, recent work in computational social choice on the generalisation of classical voting rules may prove helpful (Pini et al., 2009; Endriss et al., 2009); and in case annotators rate categories using qualitative expressions such as excellent match, the method of majority judgment of Balinski and Laraki (2011) should be considered.

(3) General aggregation of plain annotations. This is the approach we shall discuss below. It is related to voting in combinatorial domains studied in computational social choice (Chevaleyre et al., 2008), and to both binary aggregation (Dokow and Holzman, 2010; Grandi and Endriss, 2011) and judgment aggregation (List and Pettit, 2002).

(4) General aggregation of complex annotations. While appealing due to its great level of generality, this approach can only be tackled successfully once approaches (2) and (3) are sufficiently well understood.

3 Formal Model

Next we present our model for general aggregation of plain annotations into a collective annotation.

3.1 Terminology and Notation

An annotation task is defined in terms of m items, with each item j ∈ {1, . . . , m} being associated with a finite set of possible categories C_j. Annotators are asked to provide an answer for each of the items of the annotation task. In the context of plain annotations, a valid answer for item j is an element of the set A_j = C_j ∪ {?, ⊥}.[2]
Here ? represents the answer “don’t know” and we use ⊥ to indicate that the annotator has not answered (or even seen) the item at all.

[2] As discussed earlier, in the context of complex annotations, an answer could also be, say, a partial order on C_j or a function associating elements of C_j with numerical ratings.

An annotation is a vector of answers by one annotator, one answer for each item of the annotation task at hand, i.e., an annotation is an element of the Cartesian product A = A_1 × A_2 × · · · × A_m. A typical element of A will be denoted as A = (a_1, . . . , a_m). Let N = {1, . . . , n} be a finite set of n annotators (or coders). A profile A = (A_1, . . . , A_n) ∈ A^n, for a given annotation task, is a vector of annotations, one for each annotator. That is, A is an n × m matrix; e.g., a_{3,7} is the answer that the 3rd annotator provides for the 7th item.

              | Item 1 | Item 2 | Item 3
  Annotator 1 |   B    |   A    |   A
  Annotator 2 |   B    |   B    |   B
  Annotator 3 |   A    |   B    |   A
  Majority    |   B    |   B    |   A

Table 1: A profile with a collective annotation.

We want to aggregate the information provided by the annotators into a (single) collective annotation. For the sake of simplicity, we use A also as the domain of possible collective annotations (even though the distinction between ? and ⊥ may not be strictly needed here; they both indicate that we do not want to commit to any particular category). An aggregator is a function F : A^n → A, mapping any given profile into a collective annotation, i.e., a labelling of the items in the annotation task with corresponding categories (or ? or ⊥). An example is the plurality rule (also known as the majority rule for binary tasks with |C_j| = 2 for all items j), which annotates each item with the category chosen most often. Note that the collective annotation need not coincide with any of the individual annotations. Take, for example, a binary annotation task in which three coders label three items with category A or B as shown in Table 1. Here using the majority rule to aggregate the annotations would result in a collective annotation that does not fully match any annotation by an individual coder.

3.2 Basic Properties

A typical task in social choice theory is to formulate axioms that formalise specific desirable properties of an aggregator F (Arrow et al., 2002). Below we adapt three of the most basic axioms that have been considered in the social choice literature to our setting and we briefly discuss their relevance to collective annotation tasks. We will require some additional notation: for any profile A, item j, and possible answer a ∈ A_j, let N^A_{j:a} denote the set of annotators who chose answer a for item j under profile A.

• F is anonymous if it treats coders symmetrically, i.e., if for every permutation π : N → N, F(A_1, . . . , A_n) = F(A_{π(1)}, . . . , A_{π(n)}). In social choice theory, this is a fairness constraint. For us, fairness per se is not a desideratum,
In social choice theory, neutrality is also considered a basic fairness requirement (avoiding preferential treatment one candidate in an election). In the context of collective annotation there may be good reasons to violate neutrality: e.g., we may use an aggregator that assigns different default categories to different items and that can override such a default decision only in the presence of a significant majority (note that this is different from anonymity: we will often not have any information on our annotators, but we may have tangible information on items).3 • F is independent if the collective annotation of any given item j only depends on the individual annotations of j. Formally, F is independent if, for every item j and every two profiles A and A′, it is the case that whenever NA j:a = NA′ j:a for all answers a ∈Aj, then F(A)j = F(A′)j. In social choice theory, independence is often seen as a desirable albeit hard (or even impossible) to achieve property (Arrow, 1963). For collective annotation, we strongly believe that it is not a desirable property: by considering how annotators label other items we can learn about their biases and we should try to exploit this information to obtain the best possible annotation for the item at hand. Note that the plurality/majority rule is independent. All of the methods we shall propose in Section 4 are both anonymous and neutral—except to the extent to which we have to violate basic symmetry requirements in order to break ties between categories chosen equally often for a given item. None of our aggregators is independent. 3It would also be of interest to formulate a neutrality axiom w.r.t. categories (rather than items). For two categories, this idea has been discussed under the name of domainneutrality in the literature (Grandi and Endriss, 2011), but for larger sets of categories it has not yet been explored. Some annotation tasks might be subject to integrity constraints that determine the internal consistency of an annotation. For example, if our items are pairs of words and the possible categories include synonymous and antonymous, then if item 1 is about words A and B, item 2 about words B and C, and item 3 about words A and C, then any annotation that labels items 1 and 2 as synonymous should not label item 3 as antonymous. Thus, a further desirable property that will play a role for some annotation tasks is collective rationality (Grandi and Endriss, 2011): if all individual annotations respect a given integrity constraint, then so should the collective annotation. We can think of integrity constraints as imposing top-down expert knowledge on an annotation. However, for some annotation tasks, no integrity constraints may be known to us in advance, even though we may have reasons to believe that the individual annotators do respect some such constraints. In that case, selecting one of the individual annotations in the profile as the collective annotation is the only way to ensure that these integrity constraints will be satisfied by the collective annotation (Grandi and Endriss, 2011). Of course, to do so we would need to assume that there is at least one annotator who has labelled all items (and to be able to design a high-quality aggregator in this way we should have a sufficiently large number of such annotators to choose from), which may not always be possible, particularly in the context of crowdsourcing. 
4 Three Families of Aggregators

In this section we instantiate our formal model by proposing three families of methods for aggregation. Each of them is inspired, in part, by standard approaches to designing aggregation rules developed in social choice theory and, in part, by the specific needs of collective annotation. Regarding the latter point, we specifically emphasise the fact that not all annotators can be expected to be equally reliable (in general or w.r.t. certain items) and we try to integrate the process of aggregation with a process whereby less reliable annotators are either given less weight or are excluded altogether.

4.1 Bias-Correcting Majority Rules

We first want to explore the following idea: If a given annotator annotates most items with 0, then we might want to assign less significance to that
Hence, if i assigns category X less often than the general population, then her weight on X-choices will be increased by the difference (and vice versa in case she assigns X more often than the population at large). For example, if you assign 1 in two out of ten cases, while in general category 1 appears in exactly 50% of all annotations, then your weight for a choice of 1 will be 1 + 0.5 −0.2 = 1.3, while you weight for a choice of 0 will only be 0.7. (3) The relative BCM rule (RelBCM) is defined by weights wX i = Freq(X) Freqi(X). The idea is very similar to the DiffBCM rule. For the example given above, your weight for a choice of 1 would be 0.5/0.2 = 2.5, while your weight for a choice of 0 would be 0.5/0.8 = 0.625. The main difference between the ComBCM rule and the other two rules is that the former only takes into account the possible bias of individual annotators, while the latter two factor in as well the possible skewness of the data (as reflected by the labelling behaviour of the full set of annotators). In addition, while ComBCM is specific to the case of two categories, DiffBCM and RelBCM immediately generalise to any number of categories. In this case, we add up the categoryspecific weights as before and then choose the category with maximal support (i.e., we generalise the majority rule underlying the family of BCM rules to the plurality rule). We stress that our bias-correcting majority rules do not violate anonymity (nor neutrality for that matter). If we were to give less weight to a given annotator based on, say, her name, this would constitute a violation of anonymity; if we do so due to properties of the profile at hand and if we do so in a symmetric manner, then it does not. 4.2 Greedy Consensus Rules Now consider the following idea: If for a given item there is almost complete consensus amongst those coders that annotated it with a proper category (i.e., those who did not choose ? or ⊥), then we should probably adopt their choice for the collective annotation. Indeed, most aggregators will make this recommendation. Furthermore, the fact that there is almost full consensus for one item 543 may cast doubts on the reliability of coders who disagree with this near-consensus choice and we might want to disregard their views not only w.r.t. that item but also as far as the annotation of other items is concerned. Next we propose a family of aggregators that implement this idea. For simplicity, suppose that the only proper categories available are 0 and 1 and that annotators do not make use of ? (but it is easy to generalise to arbitrary numbers of categories and scenarios where different items are associated with different categories). Fix a tolerance value t ∈{0, . . . , m}. The greedy consensus rule GreedyCRt works as follows. First, initialise the set N ⋆with the full population of annotators N. Then iterate the following two steps: (1) Find the item with the strongest majority for either 0 or 1 amongst coders in N ⋆and lock in that value for the collective annotation. (2) Eliminate all coders from N ⋆who disagree on more than t items with the values locked in for the collective annotation so far. Repeat this process until the categories for all m items have been settled.6 We may think of this as a “greedy” way of identifying a coalition N ⋆with high inter-annotator agreement and then applying the majority rule to this coalition to obtain the collective annotation. 
To be precise, the above is a description of an entire family of aggregators: Whenever there is more than one item with a majority of maximal strength, we could choose to lock in any one of them. Also, when there is a split majority between annotators in N ⋆voting 0 and those voting 1, we have to use a tie-breaking rule to make a decision. Additional heuristics may be used to make these local decisions, or they may be left to chance. Note that in case t = m, GreedyCRt is simply the majority rule (as no annotator will ever get eliminated). In case t = 0, we end up with a coalition of annotators that unanimously agree with all of the categories chosen for the collective annotation. However, this coalition of perfectly aligned 6There are some similarities to Tideman’s Ranked Pairs method for preference aggregation (Tideman, 1987), which works by fixing the relative rankings of pairs of alternatives in order of the strength of the supporting majorities. In preference aggregation (unlike here), the population of voters is not reduced in the process; instead, decisions against the majority are taken whenever this is necessary to guarantee the transitivity of the resulting collective preference order. annotators need not be the largest such coalition (due to the greedy nature of our rule). Note that greedy consensus rules, as defined here, are both anonymous and neutral. Specifically, it is important not to confuse possible skewness of the data with a violation of neutrality of the aggregator. 4.3 Distance-based Aggregation Our third approach is based on the notion of distance. We first define a metric on choices to be able to say how distant two choices are. This induces an aggregator that, for a given profile, returns a collective choice that minimises the sum of distances to the individual choices in the profile.7 This opens up a wide range of possibilities; we only sketch some of them here. A natural choice is the adjusted Hamming distance H : A×A →R⩾0, which counts how many items two annotations differ on: H(A, A′) = m X j=1 δ(aj, a′ j) Here δ is the adjusted discrete distance defined as δ(x, y) = 0 if x = y or x ∈{?, ⊥} or y ∈{?, ⊥}, and as δ(x, y) = 1 in all other cases.8 Once we have fixed a distance d on A (such as H), this induces an aggregator Fd: Fd(A) = argmin A∈A n X i=1 d(A, Ai) To be precise, Fd is an irresolute aggregator that might return a set of best annotations with minimal distance to the profile. Note that FH is simply the plurality rule. This is so because every element of the Cartesian product is a possible annotation. In the presence of integrity constraints excluding some combinations, however, a distance-based rule allows for more sophisticated forms of aggregation (by choosing the optimal annotation w.r.t. all feasible annotations). We may also try to restrict the computation of distances to a subset of “reliable” annotators. Consider the following idea: If a group of annotators is (fairly) reliable, then they should have a 7This idea has been used in voting (Kemeny, 1959), belief merging (Konieczny and Pino P´erez, 2002), and judgment aggregation (Miller and Osherson, 2009). 8This δ, divided by m, is the same thing as what Artstein and Poesio (2008) call the agreement value agrj for item j. 544 (fairly) high inter-annotator agreement. By this reasoning, we should choose a group of annotators ANN ⊆N that maximises inter-annotator agreement in ANN and work with the aggregator argminA∈A P i∈ANN d(A, Ai). 
But this is too simplistic: any singleton ANN = {i} will result in perfect agreement. That is, while we can easily maximise agreement, doing so in a na¨ıve way means ignoring most of the information collected. In other words, we face the following dilemma: • On the one hand, we should choose a small set ANN (i.e., select few annotators to base our collective annotation on), as that will allow us to increase the (average) reliability of the annotators taken into account. • On the other hand, we should choose a large set ANN (i.e., select many annotators to base our collective annotation on), as that will increase the amount of information exploited. One pragmatic approach is to fix a minimum quality threshold regarding one of the two dimensions and optimise in view of the other.9 5 A Case Study In this section, we report on a case study in which we have tested our bias-correcting majority and greedy consensus rules.10 We have used the dataset created by Snow et al. (2008) for the task of recognising textual entailment, originally proposed by Dagan et al. (2006) in the PASCAL Recognizing Textual Entailment (RTE) Challenge. RTE is a binary classification task consisting in judging whether the meaning of a piece of text (the so-called hypothesis) can be inferred from another piece of text (the entailing text). The original RTE1 Challenge testset consists of 800 text-hypothesis pairs (such as T: “Chr´etien visited Peugeot’s newly renovated car factory”, H: “Peugeot manufactures cars”) with a gold standard annotation that classifies each item as either true (1)—in case H can be inferred from T— or false (0). Exactly 400 items are annotated as 0 and exactly 400 as 1. Bos and Markert (2006) performed an independent expert annotation of 9GreedyCRt is a greedy (rather than optimal) implementation of this basic idea, with the tolerance value t fixing a threshold on (a particular form of) inter-annotator agreement. 10Since the annotation task and dataset used for our case study do not involve any interesting integrity constraints, we have not tested any distance-based aggregation rules. this testset, obtaining 95% agreement between the RTE1 gold standard and their own annotation. The dataset of Snow et al. (2008) includes 10 non-expert annotations for each of the 800 items in the RTE1 testset, collected with Amazon’s Mechanical Turk. A quick examination of the dataset shows that there are a total of 164 annotators who have annotated between 20 items (124 annotators) and 800 items each (only one annotator). Nonexpert annotations with category 1 (rather than 0) are slightly more frequent (Freq(1) ≈0.57). We have applied our aggregators to this data and compared the outcomes with each other and to the gold standard. The results are summarised in Table 2 and discussed in the sequel. For each pair we report the observed agreement Ao (proportion of items on which two annotations agree) and, in brackets, Cohen’s kappa κ = Ao−Ae 1−Ae , with Ae being the expected agreement for independent annotators (Cohen, 1960; Artstein and Poesio, 2008). Note that there are several variants of the majority rule, depending on how we break ties. In Table 2, Maj1≻0 is the majority rule that chooses 1 in case the number of annotators choosing 1 is equal to the number of annotators choosing 0 (and accordingly for Maj0≻1). For 65 out of the 800 items there has been a tie (i.e., five annotators choose 0 and another five choose 1). This means that the tiebreaking rule used can have a significant impact on results. Snow et al. 
5 A Case Study

In this section, we report on a case study in which we have tested our bias-correcting majority and greedy consensus rules.[10] We have used the dataset created by Snow et al. (2008) for the task of recognising textual entailment, originally proposed by Dagan et al. (2006) in the PASCAL Recognizing Textual Entailment (RTE) Challenge. RTE is a binary classification task consisting in judging whether the meaning of a piece of text (the so-called hypothesis) can be inferred from another piece of text (the entailing text). The original RTE1 Challenge testset consists of 800 text-hypothesis pairs (such as T: “Chrétien visited Peugeot’s newly renovated car factory”, H: “Peugeot manufactures cars”) with a gold standard annotation that classifies each item as either true (1)—in case H can be inferred from T—or false (0). Exactly 400 items are annotated as 0 and exactly 400 as 1. Bos and Markert (2006) performed an independent expert annotation of this testset, obtaining 95% agreement between the RTE1 gold standard and their own annotation.

[10] Since the annotation task and dataset used for our case study do not involve any interesting integrity constraints, we have not tested any distance-based aggregation rules.

The dataset of Snow et al. (2008) includes 10 non-expert annotations for each of the 800 items in the RTE1 testset, collected with Amazon’s Mechanical Turk. A quick examination of the dataset shows that there are a total of 164 annotators who have annotated between 20 items (124 annotators) and 800 items each (only one annotator). Non-expert annotations with category 1 (rather than 0) are slightly more frequent (Freq(1) ≈ 0.57).

We have applied our aggregators to this data and compared the outcomes with each other and to the gold standard. The results are summarised in Table 2 and discussed in the sequel. For each pair we report the observed agreement A_o (proportion of items on which two annotations agree) and, in brackets, Cohen’s kappa κ = (A_o − A_e)/(1 − A_e), with A_e being the expected agreement for independent annotators (Cohen, 1960; Artstein and Poesio, 2008).

Note that there are several variants of the majority rule, depending on how we break ties. In Table 2, Maj1≻0 is the majority rule that chooses 1 in case the number of annotators choosing 1 is equal to the number of annotators choosing 0 (and accordingly for Maj0≻1). For 65 out of the 800 items there has been a tie (i.e., five annotators choose 0 and another five choose 1). This means that the tie-breaking rule used can have a significant impact on results. Snow et al. (2008) work with a majority rule where ties are broken uniformly at random and report an observed agreement (accuracy) between the majority rule and the gold standard of 89.7%. This is confirmed by our results: 89.7% is the mean of 87.5% (our result for Maj1≻0) and 91.9% (our result for Maj0≻1). If we break ties in the optimal way (in view of approximating the gold standard, which of course would not actually be possible without having access to that gold standard), then we obtain an observed agreement of 93.8%, but if we are unlucky and ties happen to get broken in the worst possible way, we obtain an observed agreement of only 85.6%. For none of our bias-correcting majority rules did we encounter any ties. Hence, for these aggregators the somewhat arbitrary choices we have to make when breaking ties are of no significance, which is an important point in their favour. Observe that all of the bias-correcting majority rules approximate the gold standard better than the majority rule with uniformly random tie-breaking.

  Annotation    | Maj1≻0      | Maj0≻1      | ComBCM      | DiffBCM     | RelBCM      | GreedyCR0   | GreedyCR15
  Gold Standard | 87.5% (.75) | 91.9% (.84) | 91.1% (.80) | 91.5% (.81) | 90.8% (.80) | 86.6% (.73) | 92.5% (.85)
  Maj1≻0        |             | 91.9% (.84) | 88.9% (.76) | 94.3% (.87) | 94.0% (.87) | 87.6% (.75) | 91.5% (.83)
  Maj0≻1        |             |             | 96.0% (.91) | 97.6% (.95) | 96.9% (.93) | 89.0% (.78) | 96.1% (.92)
  ComBCM        |             |             |             | 94.6% (.86) | 94.4% (.86) | 88.8% (.75) | 93.9% (.86)
  DiffBCM       |             |             |             |             | 98.8% (.97) | 88.6% (.75) | 94.8% (.88)
  RelBCM        |             |             |             |             |             | 88.4% (.74) | 93.8% (.86)
  GreedyCR0     |             |             |             |             |             |             | 90.6% (.81)

Table 2: Observed agreement (and κ) between collective annotations and the gold standard.

Recall that the greedy consensus rule is in fact a family of aggregators: whenever there is more than one item with a maximal majority, we may lock in any one of them. Furthermore, when there is a split majority, then ties may be broken either way. The results reported here refer to an implementation that always chooses the lexicographically first item amongst all those with a maximal majority and that breaks ties in favour of 1. These parameters yield neither the best nor the worst approximations of the gold standard.

We tested a range of tolerance values. As an example, Table 2 includes results for tolerance values 0 and 15. The coalition found for tolerance 0 consists of 46 annotators who all completely agree with the collective annotation; the coalition found for tolerance 15 consists of 156 annotators who all disagree with the collective annotation on at most 15 items. While GreedyCR0 appears to perform rather poorly, GreedyCR15 approximates the gold standard particularly well. This is surprising and suggests, on the one hand, that eliminating only the most extreme outlier annotators is a useful strategy, and on the other hand, that a high-quality collective annotation can be obtained from a group of annotators that disagree substantially.[11]

[11] Recall that 124 out of 164 coders only annotated 20 items each; a tolerance value of 15 thus is fairly lenient.
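The figures in Table 2 are pairwise comparisons of complete annotations. The sketch below, using a toy example rather than the RTE data, shows how the observed agreement and Cohen's kappa for one such pair could be computed for binary labels, estimating the expected agreement from the two label distributions.

```python
def observed_agreement(a, b):
    """Proportion of items on which two annotations agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa for two binary annotations (labels 0/1)."""
    ao = observed_agreement(a, b)
    p1_a = sum(a) / len(a)          # how often the first annotation says 1
    p1_b = sum(b) / len(b)
    ae = p1_a * p1_b + (1 - p1_a) * (1 - p1_b)   # expected chance agreement
    return (ao - ae) / (1 - ae)

# Toy example (not the RTE data): two annotations of six items.
gold = [1, 0, 1, 1, 0, 0]
pred = [1, 0, 1, 0, 0, 1]
print(round(observed_agreement(gold, pred), 3))  # 0.667
print(round(cohens_kappa(gold, pred), 3))        # 0.333
```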
6 Related Work

There is an increasing number of projects using crowdsourcing methods for labelling data. Online Games with a Purpose, originally conceived by von Ahn and Dabbish (2004) to annotate images, have been used for a variety of linguistic tasks: Lafourcade (2007) created JeuxDeMots to develop a semantic network by asking players to label words with semantically related words; Phrase Detectives (Chamberlain et al., 2008) has been used to gather annotations on anaphoric coreference; and more recently Basile et al. (2012) have developed the Wordrobe set of games for annotating named entities, word senses, homographs, and pronouns. Similarly, crowdsourcing via microworking sites like Amazon’s Mechanical Turk has been used in several annotation experiments related to tasks such as affect analysis, event annotation, sense definition and word sense disambiguation (Snow et al., 2008; Rumshisky, 2011; Rumshisky et al., 2012), amongst others.[12]

[12] See also the papers presented at the NAACL 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk (tinyurl.com/amtworkshop2010).

All these efforts face the problem of how to aggregate the information provided by a group of volunteers into a collective annotation. However, by and large, the emphasis so far has been on issues such as experiment design, data quality, and costs, with little attention being paid to the aggregation methods used, which are typically limited to some form of majority vote (or taking averages if the categories are numeric). In contrast, our focus has been on investigating different aggregation methods for arriving at a collective annotation.

Our work has connections with the literature on inter-annotator agreement. Agreement scores such as kappa are used to assess the quality of an annotation but do not play a direct role in constructing one single annotation from the labellings of several coders.[13] The methods we have proposed, in contrast, do precisely that. Still, agreement plays a prominent role in some of these methods. In our discussion of distance-based aggregation, we suggested how agreement can be used to select a subset of annotators whose individual annotations are minimally distant from the resulting collective annotation. Our greedy consensus rule also makes use of agreement to ensure a minimum level of consensus. In both cases, the aggregators have the effect of disregarding some outlier annotators.

[13] Creating a gold standard often involves adjudication of disagreements by experts, or even the removal of cases with disagreement from the dataset. See, e.g., the papers cited by Beigman Klebanov and Beigman (2009).
In application domains where it is reasonable to assume the existence of a ground truth and where we are able to model the manner in which individual judgments are being distorted relative to this ground truth, social choice theory provides tools (using again maximum-likelihood estimators) for the design of aggregators that maximise chances of recovering the ground truth for a given model of distortion (Young, 1995; Conitzer and Sandholm, 2005). In recent work, Mao et al. (2013) have discussed the use of these methods in the context of crowdsourcing. Specifically, they have designed an experiment in which the ground truth is defined unambiguously and known to the experiment designer, so as to be able to extract realistic models of distortion from the data collected in a crowdsourcing exercise. 7 Conclusions We have presented a framework for combining the expertise of speakers taking part in large-scale 14In some domains, such as medical diagnosis, it makes perfect sense to assume that there is a ground truth. However, in tasks related to linguistic knowledge and language use such an assumption seems far less justified. Hence, a collective annotation may be the closest we can get to a representation of the linguistic knowledge/use of a linguistic community. annotation projects. Such projects are becoming more and more common, due to the availability of online crowdsourcing methods for data annotation. Our work is novel in several respects. We have drawn inspiration from the field of social choice theory to formulate a general formal model for aggregation problems, which we believe sheds light on the kind of issues that arise when trying to build annotated linguistic resources from a potentially large group of annotators; and we have proposed several families of concrete methods for aggregating individual annotations that are more fine-grained that the standard majority rule that so far has been used across the board. We have tested some of our methods on a gold standard testset for the task of recognising textual entailment. Our aim has been conceptual, namely to point out that it is important for computational linguists to reflect on the methods used when aggregating annotation information. We believe that social choice theory offers an appropriate general methodology for supporting this reflection. Importantly, this does not mean that the concrete aggregation methods developed in social choice theory are immediately applicable or that all the axioms typically studied in social choice theory are necessarily relevant to aggregating linguistic annotations. Rather, what we claim is that it is the methodology of social choice theory which is useful: to formally state desirable properties of aggregators as axioms and then to investigate which specific aggregators satisfy them. To put it differently: at the moment, researchers in computational linguistics simply use some given aggregation methods (almost always the majority rule) and judge their quality on how they fare in specific experiments—but there is no principled reflection on the methods themselves. We believe that this should change and hope that the framework outlined here can provide a suitable starting point. In future work, the framework we have presented here should be tested more extensively, not only against a gold standard but also in terms of the usefulness of the derived collective annotations for training supervised learning systems. 
On the theoretical side, it would be interesting to study the axiomatic properties of the methods of aggregation we have proposed here in more depth and to define axiomatic properties of aggregators that are specifically tailored to the task of collective annotation of linguistic resources.

References

James Allen and Mark Core, 1997. DAMSL: Dialogue Act Markup in Several Layers. Discourse Resource Initiative.

Kenneth J. Arrow, Amartya K. Sen, and Kotaro Suzumura, editors. 2002. Handbook of Social Choice and Welfare. North-Holland.

Kenneth J. Arrow. 1963. Social Choice and Individual Values. John Wiley and Sons, 2nd edition. First edition published in 1951.

Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596.

Michel Balinski and Rida Laraki. 2011. Majority Judgment: Measuring, Ranking, and Electing. MIT Press.

Valerio Basile, Johan Bos, Kilian Evang, and Noortje Venhuizen. 2012. A platform for collaborative semantic annotation. In Proc. 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2012), pages 92–96.

Beata Beigman Klebanov and Eyal Beigman. 2009. From annotator agreement to noise models. Computational Linguistics, 35(4):495–503.

Johan Bos and Katja Markert. 2006. Recognising textual entailment with robust logical inference. In Machine Learning Challenges, volume 3944 of LNCS, pages 404–426. Springer-Verlag.

Steven J. Brams and Peter C. Fishburn. 2002. Voting procedures. In Kenneth J. Arrow, Amartya K. Sen, and Kotaro Suzumura, editors, Handbook of Social Choice and Welfare. North-Holland.

Jean Carletta, Stephen Isard, Anne H. Anderson, Gwyneth Doherty-Sneddon, Amy Isard, and Jacqueline C. Kowtko. 1997. The reliability of a dialogue structure coding scheme. Computational Linguistics, 23:13–31.

Jon Chamberlain, Massimo Poesio, and Udo Kruschwitz. 2008. Addressing the resource bottleneck to create large-scale annotated texts. In Semantics in Text Processing. STEP 2008 Conference Proceedings, volume 1 of Research in Computational Semantics, pages 375–380. College Publications.

Yann Chevaleyre, Ulle Endriss, Jérôme Lang, and Nicolas Maudet. 2008. Preference handling in combinatorial domains: From AI to social choice. AI Magazine, 29(4):37–46.

Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20:37–46.

Vincent Conitzer and Tuomas Sandholm. 2005. Common voting rules as maximum likelihood estimators. In Proc. 21st Conference on Uncertainty in Artificial Intelligence (UAI-2005).

Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, volume 3944 of LNCS, pages 177–190. Springer-Verlag.

Alexander Philip Dawid and Allan M. Skene. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, 28(1):20–28.

Elad Dokow and Ron Holzman. 2010. Aggregation of binary evaluations. Journal of Economic Theory, 145(2):495–511.

Ulle Endriss, Maria Silvia Pini, Francesca Rossi, and K. Brent Venable. 2009. Preference aggregation over restricted ballot languages: Sincerity and strategy-proofness. In Proc. 21st International Joint Conference on Artificial Intelligence (IJCAI-2009).

Katrin Erk, Diana McCarthy, and Nicholas Gaylord. 2009. Investigations on word senses and word usages. In Proc. 47th Annual Meeting of the Association for Computational Linguistics (ACL-2009), pages 10–18.
Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1):116–131. Gerald J. Glasser. 1959. Game theory and cumulative voting for corporate directors. Management Science, 5(2):151–156. Umberto Grandi and Ulle Endriss. 2011. Binary aggregation with integrity constraints. In Proc. 22nd International Joint Conference on Artificial Intelligence (IJCAI-2011). Panagiotis G. Ipeirotis, Foster Provost, and Jing Wang. 2010. Quality Management on Amazon Mechanical Turk. In Proc. 2nd Human Computation Workshop (HCOMP-2010). Mukund Jha, Jacob Andreas, Kapil Thadani, Sara Rosenthal, and Kathleen McKeown. 2010. Corpus creation for new genres: A crowdsourced approach to PP attachment. In Proc. NAACL-HLT Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 13–20. John Kemeny. 1959. Mathematics without numbers. Daedalus, 88:577–591. Adam Kilgarriff and Martha Palmer. 2000. Introduction to the special issue on senseval. Computers and the Humanities, 34(1):1–13. S´ebastien Konieczny and Ram´on Pino P´erez. 2002. Merging information under constraints: A logical framework. Journal of Logic and Computation, 12(5):773–808. 548 Mathieu Lafourcade. 2007. Making people play for lexical acquisition with the JeuxDeMots prototype. In Proc. 7th International Symposium on Natural Language Processing. Christian List and Philip Pettit. 2002. Aggregating sets of judgments: An impossibility result. Economics and Philosophy, 18(1):89–110. Andrew Mao, Ariel D. Procaccia, and Yiling Chen. 2013. Better human computation through principled voting. In Proc. 27th AAAI Conference on Artificial Intelligence. George A. Miller and Walter G. Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1–28. Michael K. Miller and Daniel Osherson. 2009. Methods for distance-based judgment aggregation. Social Choice and Welfare, 32(4):575–601. Martha Palmer, Hoa Trang Dang, and Christiane Fellbaum. 2007. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Natural Language Engineering, 13(2):137–163. Maria Silvia Pini, Francesca Rossi, K. Brent Venable, and Toby Walsh. 2009. Aggregating partially ordered preferences. Journal of Logic and Computation, 19(3):475–502. Vikas Raykar, Shipeng Yu, Linda Zhao, Gerardo Hermosillo Valadez, Charles Florin, Luca Bogoni, and Linda Moy. 2010. Learning from crowds. The Journal of Machine Learning Research, 11:1297–1322. Sara Rosenthal, William Lipovsky, Kathleen McKeown, Kapil Thadani, and Jacob Andreas. 2010. Towards semi-automated annotation for prepositional phrase attachment. In Proc. 7th International Conference on Language Resources and Evaluation (LREC-2010). Anna Rumshisky, Nick Botchan, Sophie Kushkuley, and James Pustejovsky. 2012. Word sense inventories by non-experts. In Proc. 8th International Conference on Language Resources and Evaluation (LREC-2012). Anna Rumshisky. 2011. Crowdsourcing word sense definition. In Proc. ACL-HLT 5th Linguistic Annotation Workshop (LAW-V). Padhraic Smyth, Usama Fayyad, Michael Burl, Pietro Perona, and Pierre Baldi. 1995. Inferring ground truth from subjective labelling of venus images. Advances in Neural Information Processing Systems, pages 1085–1092. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast—but is it good? 
Evaluating non-expert annotations for natural language tasks. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP2008), pages 254–263. Alan D. Taylor. 2005. Social Choice and the Mathematics of Manipulation. Cambridge University Press. T. Nicolaus Tideman. 1987. Independence of clones as a criterion for voting rules. Social Choice and Welfare, 4(3):185–206. Noortje Venhuizen, Valerio Basile, Kilian Evang, and Johan Bos. 2013. Gamification for word sense labeling. In Proc. 10th International Conference on Computational Semantics (IWCS-2013), pages 397– 403. Luis von Ahn and Laura Dabbish. 2004. Labeling images with a computer game. In Proc. SIGCHI Conference on Human Factors in Computing Systems, pages 319–326. ACM. H. Peyton Young. 1995. Optimal voting rules. Journal of Economic Perspectives, 9(1):51–64. 549
2013
53
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 550–560, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics ParGramBank: The ParGram Parallel Treebank Sebastian Sulger and Miriam Butt University of Konstanz, Germany {sebastian.sulger|miriam.butt}@uni-konstanz.de Tracy Holloway King eBay Inc., USA [email protected] Paul Meurer Uni Research AS, Norway [email protected] Tibor Laczk´o and Gy¨orgy R´akosi University of Debrecen, Hungary {laczko.tibor|rakosi.gyorgy}@arts.unideb.hu Cheikh Bamba Dione and Helge Dyvik and Victoria Ros´en and Koenraad De Smedt University of Bergen, Norway [email protected], {dyvik|victoria|desmedt}@uib.no Agnieszka Patejuk Polish Academy of Sciences [email protected] ¨Ozlem C¸ etino˘glu University of Stuttgart, Germany [email protected] I Wayan Arka* and Meladel Mistica+ *Australian National University and Udayana University, Indonesia +Australian National University [email protected], [email protected] Abstract This paper discusses the construction of a parallel treebank currently involving ten languages from six language families. The treebank is based on deep LFG (LexicalFunctional Grammar) grammars that were developed within the framework of the ParGram (Parallel Grammar) effort. The grammars produce output that is maximally parallelized across languages and language families. This output forms the basis of a parallel treebank covering a diverse set of phenomena. The treebank is publicly available via the INESS treebanking environment, which also allows for the alignment of language pairs. We thus present a unique, multilayered parallel treebank that represents more and different types of languages than are available in other treebanks, that represents deep linguistic knowledge and that allows for the alignment of sentences at several levels: dependency structures, constituency structures and POS information. 1 Introduction This paper discusses the construction of a parallel treebank currently involving ten languages that represent several different language families, including non-Indo-European. The treebank is based on the output of individual deep LFG (LexicalFunctional Grammar) grammars that were developed independently at different sites but within the overall framework of ParGram (the Parallel Grammar project) (Butt et al., 1999a; Butt et al., 2002). The aim of ParGram is to produce deep, wide coverage grammars for a variety of languages. Deep grammars provide detailed syntactic analysis, encode grammatical functions as well as 550 other grammatical features such as tense or aspect, and are linguistically well-motivated. The ParGram grammars are couched within the linguistic framework of LFG (Bresnan, 2001; Dalrymple, 2001) and are constructed with a set of grammatical features that have been commonly agreed upon within the ParGram group. ParGram grammars are implemented using XLE, an efficient, industrialstrength grammar development platform that includes a parser, a generator and a transfer system (Crouch et al., 2012). XLE has been developed in close collaboration with the ParGram project. Over the years, ParGram has continuously grown and includes grammars for Arabic, Chinese, English, French, German, Georgian, Hungarian, Indonesian, Irish, Japanese, Malagasy, Murrinh-Patha, Norwegian, Polish, Spanish, Tigrinya, Turkish, Urdu, Welsh and Wolof. 
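To fix intuitions about what "multilayered" means here, the fragment below gives a deliberately hypothetical data model for a banked analysis and for the sentence-level pairing of analyses across languages. None of it reflects the actual Prolog or XML schemas of ParGramBank, which are not reproduced in this excerpt; it only illustrates that each sentence carries POS, constituency (c-structure) and functional (f-structure) layers and that analyses of the same construction are linked across languages.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class BankedAnalysis:
    """One analysis in the bank; the structure fields are kept opaque because the
    real Prolog/XML exports are not shown here (hypothetical model)."""
    lang: str             # e.g. an ISO 639-3 code such as "eng" or "urd"
    sent_id: int          # index of the sentence in the common construction set
    text: str
    pos_tags: List[str]   # POS layer
    c_structure: object   # constituency tree
    f_structure: object   # functional (dependency-like) structure

def sentence_level_pairs(bank: Dict[Tuple[str, int], BankedAnalysis], src: str, tgt: str):
    """Sentence-level alignment: analyses sharing a sentence id in two languages
    are translations of the same construction."""
    for (lang, sid), analysis in sorted(bank.items()):
        if lang == src and (tgt, sid) in bank:
            yield analysis, bank[(tgt, sid)]
```

Phrase-level alignment of c-structures and f-structures is layered on top of such records, as described in section 6 of this paper.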
ParGram grammars produce output that has been parallelized maximally across languages according to a set of commonly agreed upon universal proto-type analyses and feature values. This output forms the basis of the ParGramBank parallel treebank discussed here. ParGramBank is constructed using an innovative alignment methodology developed in the XPAR project (Dyvik et al., 2009) in which grammar parallelism is presupposed to propagate alignment across different projections (section 6). This methodology has been implemented with a drag-and-drop interface as part of the LFG Parsebanker in the INESS infrastructure (Ros´en et al., 2012; Ros´en et al., 2009). ParGramBank has been constructed in INESS and is accessible in this infrastructure, which also offers powerful search and visualization. In recent years, parallel treebanking1 has gained in importance within NLP. An obvious application for parallel treebanking is machine translation, where treebank size is a deciding factor for whether a particular treebank can support a particular kind of research project. When conducting in-depth linguistic studies of typological features, other factors such as the number of included languages, the number of covered phenomena, and the depth of linguistic analysis become more important. The treebanking effort reported on in this paper supports work of the latter focus, including efforts at multilingual dependency parsing (Naseem et al., 2012). We have 1Throughout this paper ‘treebank’ refers to both phrasestructure resources and their natural extensions to dependency and other deep annotation banks. created a parallel treebank whose prototype includes ten typologically diverse languages and reflects a diverse set of phenomena. We thus present a unique, multilayered parallel treebank that represents more languages than are currently available in other treebanks, and different types of languages as well. It contains deep linguistic knowledge and allows for the parallel and simultaneous alignment of sentences at several levels. LFG’s f(unctional)-structure encodes dependency structures as well as information that is equivalent to Quasi-Logical Forms (van Genabith and Crouch, 1996). LFG’s c(onstituent)-structure provides information about constituency, hierarchical relations and part-of-speech. Currently, ParGramBank includes structures for the following languages (with the ISO 639-3 code and language family): English (eng, Indo-European), Georgian (kat, Kartvelian), German (deu, Indo-European), Hungarian (hun, Uralic), Indonesian (ind, Austronesian), Norwegian (Bokm˚al) (nob, Indo-European), Polish (pol, Indo-European), Turkish (tur, Altaic), Urdu (urd, Indo-European) and Wolof (wol, NigerCongo). It is freely available for download under the CC-BY 3.0 license via the INESS treebanking environment and comes in two formats: a Prolog format and an XML format.2 This paper is structured as follows. Section 2 discusses related work in parallel treebanking. Section 3 presents ParGram and its approach to parallel treebanking. Section 4 focuses on the treebank design and its construction. Section 5 contains examples from the treebank, focusing on typological aspects and challenges for parallelism. Section 6 elaborates on the mechanisms for parallel alignment of the treebank. 2 Related Work There have been several efforts in parallel treebanking across theories and annotation schemes. Kuhn and Jellinghaus (2006) take a minimal approach towards multilingual parallel treebanking. 
They bootstrap phrasal alignments over a sentence-aligned parallel corpus of English, French, German and Spanish and report concrete treebank annotation work on a sample of sentences from the Europarl corpus. Their annotation 2http://iness.uib.no. The treebank is in the public domain (CC-BY 3.0). The use of the INESS platform itself is not subject to any licensing. To access the treebank, click on ‘Treebank selection’ and choose the ParGram collection. 551 scheme is the “leanest” possible scheme in that it consists solely of a bracketing for a sentence in a language (where only those units that play the role of a semantic argument or modifier in a larger unit are bracketed) and a correspondence relation of the constituents across languages. Klyueva and Mare˘cek (2010) present a small parallel treebank using data and tools from two existing treebanks. They take a syntactically annotated gold standard text for one language and run an automated annotation on the parallel text for the other language. Manually annotated Russian data are taken from the SynTagRus treebank (Nivre et al., 2008), while tools for parsing the corresponding text in Czech are taken from the TectoMT framework (Popel and ˇZabokrtsk´y, 2010). The SMULTRON project is concerned with constructing a parallel treebank of English, German and Swedish. The sentences have been POS-tagged and annotated with phrase structure trees. These trees have been aligned on the sentence, phrase and word level. Additionally, the German and Swedish monolingual treebanks contain lemma information. The treebank is distributed in TIGERXML format (Volk et al., 2010). Megyesi et al. (2010) discuss a parallel EnglishSwedish-Turkish treebank. The sentences in each language are annotated morphologically and syntactically with automatic tools, aligned on the sentence and the word level and partially handcorrected.3 A further parallel treebanking effort is ParTUT, a parallel treebank (Sanguinetti and Bosco, 2011; Bosco et al., 2012) which provides dependency structures for Italian, English and French and which can be converted to a CCG (Combinatory Categorial Grammar) format. Closest to our work is the ParDeepBank, which is engaged in the creation of a highly parallel treebank of English, Portuguese and Bulgarian. ParDeepBank is couched within the linguistic framework of HPSG (Head-Driven Phrase Structure Grammar) and uses parallel automatic HPSG grammars, employing the same tools and implementation strategies across languages (Flickinger et al., 2012). The parallel treebank is aligned on the sentence, phrase and word level. In sum, parallel treebanks have so far focused exclusively on Indo-European languages 3The paper mentions Hindi as the fourth language, but this is not yet available: http://stp.lingfil.uu. se/˜bea/turkiska/home-en.html. (with Turkish providing the one exception) and generally do not extend beyond three or four languages. In contrast, our ParGramBank treebank currently includes ten typologically different languages from six different language families (Altaic, Austronesian, Indo-European, Kartvelian, Niger-Congo, Uralic). A further point of comparison with ParDeepBank is that it relies on dynamic treebanks, which means that structures are subject to change during the further development of the resource grammars. In ParDeepBank, additional machinery is needed to ensure correct alignment on the phrase and word level (Flickinger et al., 2012, p. 105). 
ParGramBank contains finalized analyses, structures and features that were designed collaboratively over more than a decade, thus guaranteeing a high degree of stable parallelism. However, with the methodology developed within XPAR, alignments can easily be recomputed from f-structure alignments in case of grammar or feature changes, so that we also have the flexible capability of allowing ParGramBank to include dynamic treebanks. 3 ParGram and its Feature Space The ParGram grammars use the LFG formalism which produces c(onstituent)-structures (trees) and f(unctional)-structures as the syntactic analysis. LFG assumes a version of Chomsky’s Universal Grammar hypothesis, namely that all languages are structured by similar underlying principles (Chomsky, 1988; Chomsky, 1995). Within LFG, f-structures encode a language universal level of syntactic analysis, allowing for crosslinguistic parallelism at this level of abstraction. In contrast, c-structures encode language particular differences in linear word order, surface morphological vs. syntactic structures, and constituency (Dalrymple, 2001). Thus, while the Chomskyan framework is derivational in nature, LFG departs from this view by embracing a strictly representational approach to syntax. ParGram tests the LFG formalism for its universality and coverage limitations to see how far parallelism can be maintained across languages. Where possible, analyses produced by the grammars for similar constructions in each language are parallel, with the computational advantage that the grammars can be used in similar applications and that machine translation can be simplified. 552 The ParGram project regulates the features and values used in its grammars. Since its inception in 1996, ParGram has included a “feature committee”, which collaboratively determines norms for the use and definition of a common multilingual feature and analysis space. Adherence to feature committee decisions is supported technically by a routine that checks the grammars for compatibility with a feature declaration (King et al., 2005); the feature space for each grammar is included in ParGramBank. ParGram also conducts regular meetings to discuss constructions, analyses and features. For example, Figure 1 shows the c-structure of the Urdu sentence in (1) and the c-structure of its English translation. Figure 2 shows the fstructures for the same sentences. The left/upper c- and f-structures show the parse from the English ParGram grammar, the right/lower ones from Urdu ParGram grammar.4,5 The c-structures encode linear word order and constituency and thus look very different; e.g., the English structure is rather hierarchical while the Urdu structure is flat (Urdu is a free word-order language with no evidence for a VP; Butt (1995)). The f-structures, in contrast, are parallel aside from grammar-specific characteristics such as the absence of grammatical gender marking in English and the absence of articles in Urdu.6 (1) ? Aj J K. QºK QK A JK@ ú G àA‚» kisAn=nE apnA farmer.M.Sg=Erg self.M.Sg TrEkTar bEc-A tractor.M.Sg sell-Perf.M.Sg ‘Did the farmer sell his tractor?’ With parallel analyses and parallel features, maximal parallelism across typologically different languages is maintained. As a result, during the construction of the treebank, post-processing and conversion efforts are kept to a minimum. 4The Urdu ParGram grammar makes use of a transliteration scheme that abstracts away from the Arabic-based script; the transliteration scheme is detailed in Malik et al. (2010). 
5In the c-structures, dotted lines indicate distinct functional domains; e.g., in Figure 1, the NP the farmer and the VP sell his tractor belong to different f-structures: the former maps onto the SUBJ f-structure, while the latter maps onto the topmost f-structure (Dyvik et al., 2009). Section 6 elaborates on functional domains. 6The CASE feature also varies: since English does not distinguish between accusative, dative, and other oblique cases, the OBJ is marked with a more general obl CASE. Figure 1: English and Urdu c-structures We emphasize the fact that ParGramBank is characterized by a maximally reliable, humancontrolled and linguistically deep parallelism across aligned sentences. Generally, the result of automatic sentence alignment procedures are parallel corpora where the corresponding sentences normally have the same purported meaning as intended by the translator, but they do not necessarily match in terms of structural expression. In building ParGramBank, conscious attention is paid to maintaining semantic and constructional parallelism as much as possible. This design feature renders our treebank reliable in cases when the constructional parallelism is reduced even at fstructure. For example, typological variation in the presence or absence of finite passive constructions represents a case of potential mismatch. Hungarian, one of the treebank languages, has no productive finite passives. The most common strategy in translation is to use an active construction with a topicalized object, with no overt subject and with 3PL verb agreement: (2) A f´a-t ki-v´ag-t-´ak. the tree-ACC out-cut-PAST-3PL ‘The tree was cut down.’ In this case, a topicalized object in Hungarian has to be aligned with a (topical) subject in English. Given that both the sentence level and the phrase level alignments are human-controlled in the treebank (see sections 4 and 6), the greatest possible parallelism is reliably captured even in such cases of relative grammatical divergence. 553 Figure 2: Parallel English and Urdu f-structures 4 Treebank Design and Construction For the initial seeding of the treebank, we focused on 50 sentences which were constructed manually to cover a diverse range of phenomena (transitivity, voice alternations, interrogatives, embedded clauses, copula constructions, control/raising verbs, etc.). We followed Lehmann et al. (1996) and Bender et al. (2011) in using coverage of grammatical constructions as a key component for grammar development. (3) lists the first 16 sentences of the treebank. An expansion to 100 sentences is scheduled for next year. (3) a. Declaratives: 1. The driver starts the tractor. 2. The tractor is red. b. Interrogatives: 3. What did the farmer see? 4. Did the farmer sell his tractor? c. Imperatives: 5. Push the button. 6. Don’t push the button. d. Transitivity: 7. The farmer gave his neighbor an old tractor. 8. The farmer cut the tree down. 9. The farmer groaned. e. Passives and traditional voice: 10. My neighbor was given an old tractor by the farmer. 11. The tree was cut down yesterday. 12. The tree had been cut down. 13. The tractor starts with a shudder. f. Unaccusative: 14. The tractor appeared. g. Subcategorized declaratives: 15. The boy knows the tractor is red. 16. The child thinks he started the tractor. The sentences were translated from English into the other treebank languages. Currently, these languages are: English, Georgian, German, Hungarian, Indonesian, Norwegian (Bokm˚al), Polish, Turkish, Urdu and Wolof. 
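To make the notion of f-structure parallelism discussed above concrete, the fragment below encodes hand-simplified approximations of the English and Urdu f-structures for sentence 4 of (3) ("Did the farmer sell his tractor?", cf. Figure 2) as nested attribute-value structures and checks that they agree once language-particular features such as CASE and grammatical gender are set aside (cf. footnote 6). The attribute values shown are illustrative guesses, not the grammars' actual output, and many features of the real analyses are omitted.

```python
# Hand-simplified approximations of the two f-structures in Figure 2; the real
# analyses contain many more features (sentence type, verb type, the possessive
# on "tractor", ...), so treat the values below as illustrative guesses only.
FS_ENG = {"PRED": "sell<SUBJ,OBJ>", "TENSE": "past",
          "SUBJ": {"PRED": "farmer", "NUM": "sg", "CASE": "nom"},
          "OBJ": {"PRED": "tractor", "NUM": "sg", "CASE": "obl"}}
FS_URD = {"PRED": "bEc<SUBJ,OBJ>", "TENSE": "past",
          "SUBJ": {"PRED": "kisAn", "NUM": "sg", "GEND": "masc", "CASE": "erg"},
          "OBJ": {"PRED": "TrEkTar", "NUM": "sg", "GEND": "masc", "CASE": "nom"}}

LANGUAGE_PARTICULAR = {"CASE", "GEND"}   # features expected to differ across grammars

def parallel(fs1, fs2, ignore=LANGUAGE_PARTICULAR):
    """True if two f-structures share the same skeleton of grammatical functions
    and agree on all remaining atomic features; PRED values are language specific
    and are therefore only compared structurally (via their sub-f-structures)."""
    keys = lambda fs: {k for k in fs if k not in ignore and k != "PRED"}
    if keys(fs1) != keys(fs2):
        return False
    for k in keys(fs1):
        a, b = fs1[k], fs2[k]
        if isinstance(a, dict) and isinstance(b, dict):
            if not parallel(a, b, ignore):
                return False
        elif a != b:
            return False
    return True

print(parallel(FS_ENG, FS_URD))   # True: differences are confined to CASE and GEND
```

A check of this kind is only meant to convey the idea behind the shared feature space; the actual grammars are validated against the common ParGram feature declaration instead.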
The translations were done by ParGram grammar developers (i.e., expert linguists and native speakers). The sentences were automatically parsed with ParGram grammars using XLE. Since the parsing was performed sentence by sentence, our resulting treebank is automatically aligned at the sentence level. The resulting c- and f-structures were banked in a database using the LFG Parsebanker (Ros´en et al., 2009). The structures were disambiguated either prior to banking using XLE or during banking with the LFG Parsebanker and its discriminant-based disambiguation technique. The banked analyses can be exported and downloaded in a Prolog format using the LFG Parsebanker interface. Within XLE, we automatically convert the structures to a simple XML format and make these available via ParGramBank as well. The Prolog format is used with applications which use XLE to manipulate the structures, e.g. for further semantic processing (Crouch and King, 2006) or for sentence condensation (Crouch et al., 2004). 554 5 Challenges for Parallelism We detail some challenges in maintaining parallelism across typologically distinct languages. 5.1 Complex Predicates Some languages in ParGramBank make extensive use of complex predicates. For example, Urdu uses a combination of predicates to express concepts that in languages like English are expressed with a single verb, e.g., ‘memory do’ = ‘remember’, ‘fear come’ = ‘fear’. In addition, verb+verb combinations are used to express permissive or aspectual relations. The strategy within ParGram is to abstract away from the particular surface morphosyntactic expression and aim at parallelism at the level of f-structure. That is, monoclausal predications are analyzed via a simple f-structure whether they consist of periphrastically formed complex predicates (Urdu, Figure 3), a simple verb (English, Figure 4), or a morphologically derived form (Turkish, Figure 5). In Urdu and in Turkish, the top-level PRED is complex, indicating a composed predicate. In Urdu, this reflects the noun-verb complex predicate sTArT kar ‘start do’, in Turkish it reflects a morphological causative. Despite this morphosyntactic complexity, the overall dependency structure corresponds to that of the English simple verb. (4) ù ïf A KQ » HPA J ƒ ñ » Q º K QK Pñ J K @PX DrAIvar TrEkTar=kO driver.M.Sg.Nom tractor.M.Sg=Acc sTArT kartA hE start.M.Sg do.Impf.M.Sg be.Pres.3Sg ‘The driver starts the tractor.’ (5) s¨ur¨uc¨u trakt¨or-¨u c¸alıs¸-tır-ıyor driver.Nom tractor-Acc work-Caus-Prog.3Sg ‘The driver starts the tractor.’ The f-structure analysis of complex predicates is thus similar to that of languages which do not use complex predicates, resulting in a strong syntactic parallelism at this level, even across typologically diverse languages. 5.2 Negation Negation also has varying morphosyntactic surface realizations. The languages in ParGramBank differ with respect to their negation strategies. Languages such as English and German use independent negation: they negate using words such as Figure 3: Complex predicate: Urdu analysis of (4) Figure 4: Simple predicate: English analysis of (4) adverbs (English not, German nicht) or verbs (English do-support). Other languages employ nonindependent, morphological negation techniques; Turkish, for instance, uses an affix on the verb, as in (6). 555 Figure 5: Causative: Turkish analysis of (5) (6) d¨u˘gme-ye bas-ma button-Dat push-Neg.Imp ‘Don’t push the button.’ Within ParGram we have not abstracted away from this surface difference. 
The English not in (6) functions as an adverbial adjunct that modifies the main verb (see top part of Figure 6) and information would be lost if this were not represented at f-structure. However, the same cannot be said of the negative affix in Turkish — the morphological affix is not an adverbial adjunct. We have therefore currently analyzed morphological negation as adding a feature to the f-structure which marks the clause as negative, see bottom half of Figure 6. 5.3 Copula Constructions Another challenge to parallelism comes from copula constructions. An approach advocating a uniform treatment of copulas crosslinguistically was advocated in the early years of ParGram (Butt et al., 1999b), but this analysis could not do justice to the typological variation found with copulas. ParGramBank reflects the typological difference with three different analyses, with each language making a language-specific choice among the three possibilities that have been identified (Dalrymple et al., 2004; Nordlinger and Sadler, 2007; Attia, 2008; Sulger, 2011; Laczk´o, 2012). The possible analyses are demonstrated here with respect to the sentence The tractor is red. The English grammar (Figure 7) uses a raising approach that reflects the earliest treatments of copulas in LFG (Bresnan, 1982). The copula takes a non-finite complement whose subject is raised to the matrix clause as a non-thematic subject of the copula. In contrast, in Urdu (Figure 8), the Figure 6: Different f-structural analyses for negation (English vs. Turkish) copula is a two-place predicate, assigning SUBJ and PREDLINK functions. The PREDLINK function is interpreted as predicating something about the subject. Finally, in languages like Indonesian (Figure 9), there is no overt copula and the adjective is the main predicational element of the clause. Figure 7: English copula example 556 Figure 8: Urdu copula example Figure 9: Indonesian copula example 5.4 Summary This section discussed some challenges for maintaining parallel analyses across typologically diverse languages. Another challenge we face is when no corresponding construction exists in a language, e.g. with impersonals as in the English It is raining. In this case, we provide a translation and an analysis of the structure of the corresponding translation, but note that the phenomenon being exemplified does not actually exist in the language. A further extension to the capabilities of the treebank could be the addition of pointers from the alternative structure used in the translation to the parallel aligned set of sentences that correspond to this alternative structure. 6 Linguistically Motivated Alignment The treebank is automatically aligned on the sentence level, the top level of alignment within ParGramBank. For phrase-level alignments, we use the drag-and-drop alignment tool in the LFG Parsebanker (Dyvik et al., 2009). The tool allows the alignment of f-structures by dragging the index of a subsidiary source f-structure onto the index of the corresponding target f-structure. Two fstructures correspond if they have translationally matching predicates, and the arguments of each predicate correspond to an argument or adjunct in the other f-structure. The tool automatically computes the alignment of c-structure nodes on the basis of the manually aligned corresponding fstructures.7 7Currently we have not measured inter-annotator agreement (IAA) for the f-structure alignments. The f-structure alignments were done by only one person per language pair. 
We anticipate that multiple annotators will be needed for this This method is possible because the c-structure to f-structure correspondence (the φ relation) is encoded in the ParGramBank structures, allowing the LFG Parsebanker tool to compute which cstructure nodes contributed to a given f-structure via the inverse (φ−1) mapping. A set of nodes mapping to the same f-structure is called a ‘functional domain’. Within a source and a target functional domain, two nodes are automatically aligned only if they dominate corresponding word forms. In Figure 10 the nodes in each functional domain in the trees are connected by whole lines while dotted lines connect different functional domains. Within a functional domain, thick whole lines connect the nodes that share alignment; for simplicity the alignment is only indicated for the top nodes. The automatically computed c-structural alignments are shown by the curved lines. The alignment information is stored as an additional layer and can be used to explore alignments at the string (word), phrase (c)structure, and functional (f-)structure levels. We have so far aligned the treebank pairs English-Urdu, English-German, English-Polish and Norwegian-Georgian. As Figure 10 illustrates for (7) in an English-Urdu pairing, the English object neighbor is aligned with the Urdu indirect object (OBJ-GO) hamsAyA ‘neighbor’, while the English indirect object (OBJ-TH) tractor is aligned with the Urdu object TrEkTar ‘tractor’. The cstructure correspondences were computed automatically from the f-structure alignments. (7) AK X QºK QK A K@QK ñ» ú G A‚Òïf ú æK@ ú G àA‚» kisAn=nE apnE farmer.M.Sg=Erg self.Obl hamsAyE=kO purAnA neighbor.M.Sg.Obl=Acc old.M.Sg TrEkTar di-yA tractor.M.Sg give-Perf.M.Sg ‘The farmer gave his neighbor an old tractor.’ The INESS platform additionally allows for the highlighting of connected nodes via a mouse-over technique. It thus provides a powerful and flexible tool for the semi-automatic alignment and subsetask in the future, in which case we will measure IAA for this step. 557 Figure 10: Phrase-aligned treebank example English-Urdu: The farmer gave his neighbor an old tractor. quent inspection of parallel treebanks which contain highly complex linguistic structures.8 7 Discussion and Future Work We have discussed the construction of ParGramBank, a parallel treebank for ten typologically different languages. The analyses in ParGramBank are the output of computational LFG ParGram grammars. As a result of ParGram’s centrally agreed upon feature sets and prototypical analyses, the representations are not only deep in nature, but maximally parallel. The representations offer information about dependency relations as well as word order, constituency and part-ofspeech. In future ParGramBank releases, we will provide more theory-neutral dependencies along with the LFG representations. This will take the form of triples (King et al., 2003). We also plan to provide a POS-tagged and a named entity marked up version of the sentences; these will be of use for more general NLP applications and for systems which use such markup as input to deeper processing. 8One reviewer inquires about possibilities of linking (semi-)automatically between languages, for example using lexical resources such as WordNets or Panlex. We agree that this would be desirable, but unrealizable, since many of the languages included in ParGramBank do not have a WordNet resource and are not likely to achieve an adequate one soon. 
Third, the treebank will be expanded to include 100 more sentences within the next year. We also plan to include more languages as other ParGram groups contribute structures to ParGramBank. ParGramBank, including its multilingual sentences and all annotations, is made freely available for research and commercial use under the CC-BY 3.0 license via the INESS platform, which supports alignment methodology developed in the XPAR project and provides search and visualization methods for parallel treebanks. We encourage the computational linguistics community to contribute further layers of annotation, including semantic (Crouch and King, 2006), abstract knowledge representational (Bobrow et al., 2007), PropBank (Palmer et al., 2005), or TimeBank (Mani and Pustejovsky, 2004) annotations. References Mohammed Attia. 2008. A Unified Analysis of Copula Constructions. In Proceedings of the LFG ’08 Conference, pages 89–108. CSLI Publications. Emily M. Bender, Dan Flickinger, and Stephan Oepen. 2011. Grammar Engineering and Linguistic Hypothesis Testing: Computational Support for Complexity in Syntactic Analysis. In Emily M. Bender and Jennifer E. Arnold, editors, Languages from a Cognitive Perspective: Grammar, Usage and Processing, pages 5–30. CSLI Publications. 558 Daniel G. Bobrow, Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Lauri Karttunen, Tracy Holloway King, Rowan Nairn, Lottie Price, and Annie Zaenen. 2007. Precision-focused Textual Inference. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. Cristina Bosco, Manuela Sanguinetti, and Leonardo Lesmo. 2012. The Parallel-TUT: a multilingual and multiformat treebank. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 1932–1938, Istanbul, Turkey. European Language Resources Association (ELRA). Joan Bresnan. 1982. The Passive in Lexical Theory. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, pages 3–86. The MIT Press. Joan Bresnan. 2001. Lexical-Functional Syntax. Blackwell Publishing. Miriam Butt, Stefanie Dipper, Anette Frank, and Tracy Holloway King. 1999a. Writing LargeScale Parallel Grammars for English, French and German. In Proceedings of the LFG99 Conference. CSLI Publications. Miriam Butt, Tracy Holloway King, Mar´ıa-Eugenia Ni˜no, and Fr´ed´erique Segond. 1999b. A Grammar Writer’s Cookbook. CSLI Publications. Miriam Butt, Helge Dyvik, Tracy Holloway King, Hiroshi Masuichi, and Christian Rohrer. 2002. The Parallel Grammar Project. In Proceedings of the COLING-2002 Workshop on Grammar Engineering and Evaluation, pages 1–7. Miriam Butt. 1995. The Structure of Complex Predicates in Urdu. CSLI Publications. Noam Chomsky. 1988. Lectures on Government and Binding: The Pisa Lectures. Foris Publications. Noam Chomsky. 1995. The Minimalist Program. MIT Press. Dick Crouch and Tracy Holloway King. 2006. Semantics via F-structure Rewriting. In Proceedings of the LFG06 Conference, pages 145–165. CSLI Publications. Dick Crouch, Tracy Holloway King, John T. Maxwell III, Stefan Riezler, and Annie Zaenen. 2004. Exploiting F-structure Input for Sentence Condensation. In Proceedings of the LFG04 Conference, pages 167–187. CSLI Publications. Dick Crouch, Mary Dalrymple, Ronald M. Kaplan, Tracy Holloway King, John T. Maxwell III, and Paula Newman, 2012. XLE Documentation. Palo Alto Research Center. Mary Dalrymple, Helge Dyvik, and Tracy Holloway King. 2004. Copular Complements: Closed or Open? 
In Proceedings of the LFG ’04 Conference, pages 188–198. CSLI Publications. Mary Dalrymple. 2001. Lexical Functional Grammar, volume 34 of Syntax and Semantics. Academic Press. Helge Dyvik, Paul Meurer, Victoria Ros´en, and Koenraad De Smedt. 2009. Linguistically Motivated Parallel Parsebanks. In Proceedings of the Eighth International Workshop on Treebanks and Linguistic Theories (TLT8), pages 71–82, Milan, Italy. EDUCatt. Dan Flickinger, Valia Kordoni, Yi Zhang, Ant´onio Branco, Kiril Simov, Petya Osenova, Catarina Carvalheiro, Francisco Costa, and S´ergio Castro. 2012. ParDeepBank: Multiple Parallel Deep Treebanking. In Proceedings of the 11th International Workshop on Treebanks and Linguistic Theories (TLT11), pages 97–107, Lisbon. Edic¸˜oes Colibri. Tracy Holloway King, Richard Crouch, Stefan Riezler, Mary Dalrymple, and Ronald Kaplan. 2003. The PARC700 Dependency Bank. In Proceedings of the EACL03: 4th International Workshop on Linguistically Interpreted Corpora (LINC-03). Tracy Holloway King, Martin Forst, Jonas Kuhn, and Miriam Butt. 2005. The Feature Space in Parallel Grammar Writing. In Emily M. Bender, Dan Flickinger, Frederik Fouvry, and Melanie Siegel, editors, Research on Language and Computation: Special Issue on Shared Representation in Multilingual Grammar Engineering, volume 3, pages 139–163. Springer. Natalia Klyueva and David Mare˘cek. 2010. Towards a Parallel Czech-Russian Dependency Treebank. In Proceedings of the Workshop on Annotation and Exploitation of Parallel Corpora, Tartu. Northern European Association for Language Technology (NEALT). Jonas Kuhn and Michael Jellinghaus. 2006. Multilingual Parallel Treebanking: A Lean and Flexible Approach. In Proceedings of the LREC 2006, Genoa, Italy. ELRA/ELDA. Tibor Laczk´o. 2012. On the (Un)Bearable Lightness of Being an LFG Style Copula in Hungarian. In Proceedings of the LFG12 Conference, pages 341–361. CSLI Publications. Sabine Lehmann, Stephan Oepen, Sylvie RegnierProst, Klaus Netter, Veronika Lux, Judith Klein, Kirsten Falkedal, Frederik Fouvry, Dominique Estival, Eva Dauphin, Herv´e Compagnion, Judith Baur, Lorna Balkan, and Doug Arnold. 1996. TSNLP — Test Suites for Natural Language Processing. In Proceedings of COLING, pages 711 – 716. Muhammad Kamran Malik, Tafseer Ahmed, Sebastian Sulger, Tina B¨ogel, Atif Gulzar, Ghulam Raza, Sarmad Hussain, and Miriam Butt. 2010. Transliterating Urdu for a Broad-Coverage Urdu/Hindi LFG Grammar. In Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC 2010), Valletta, Malta. 559 Inderjeet Mani and James Pustejovsky. 2004. Temporal Discourse Models for Narrative Structure. In Proceedings of the 2004 ACL Workshop on Discourse Annotation, pages 57–64. Be´ata Megyesi, Bengt Dahlqvist, ´Eva ´A. Csat´o, and Joakim Nivre. 2010. The English-Swedish-Turkish Parallel Treebank. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), Valletta, Malta. European Language Resources Association (ELRA). Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective Sharing for Multilingual Dependency Parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 629–637, Jeju Island, Korea, July. Association for Computational Linguistics. Joakim Nivre, Igor Boguslavsky, and Leonid Iomdin. 2008. Parsing the SynTagRus Treebank. In Proceedings of COLING08, pages 641–648. Rachel Nordlinger and Louisa Sadler. 2007. 
Verbless Clauses: Revealing the Structure within. In Annie Zaenen, Jane Simpson, Tracy Holloway King, Jane Grimshaw, Joan Maling, and Chris Manning, editors, Architectures, Rules and Preferences: A Festschrift for Joan Bresnan, pages 139–160. CSLI Publications. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31(1):71–106. Martin Popel and Zdenˇek ˇZabokrtsk´y. 2010. TectoMT: Modular NLP Framework. In Proceedings of the 7th International Conference on Advances in Natural Language Processing (IceTAL 2010), pages 293–304. Victoria Ros´en, Paul Meurer, and Koenraad de Smedt. 2009. LFG Parsebanker: A Toolkit for Building and Searching a Treebank as a Parsed Corpus. In Proceedings of the 7th International Workshop on Treebanks and Linguistic Theories (TLT7), pages 127– 133, Utrecht. LOT. Victoria Ros´en, Koenraad De Smedt, Paul Meurer, and Helge Dyvik. 2012. An Open Infrastructure for Advanced Treebanking. In META-RESEARCH Workshop on Advanced Treebanking at LREC2012, pages 22–29, Istanbul, Turkey. Manuela Sanguinetti and Cristina Bosco. 2011. Building the Multilingual TUT Parallel Treebank. In Proceedings of Recent Advances in Natural Language Processing, pages 19–28. Sebastian Sulger. 2011. A Parallel Analysis of haveType Copular Constructions in have-Less IndoEuropean Languages. In Proceedings of the LFG ’11 Conference. CSLI Publications. Josef van Genabith and Dick Crouch. 1996. Direct and Underspecified Interpretations of LFG f-structures. In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96), volume 1, pages 262–267, Copenhagen, Denmark. Martin Volk, Anne G¨ohring, Torsten Marek, and Yvonne Samuelsson. 2010. SMULTRON (version 3.0) — The Stockholm MULtilingual parallel TReebank. http://www.cl.uzh.ch/research/paralleltreebanks en. html. 560
2013
54
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 561–571, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Identifying Bad Semantic Neighbors for Improving Distributional Thesauri Olivier Ferret CEA, LIST, Vision and Content Engineering Laboratory, Gif-sur-Yvette, F-91191 France. [email protected] Abstract Distributional thesauri are now widely used in a large number of Natural Language Processing tasks. However, they are far from containing only interesting semantic relations. As a consequence, improving such thesaurus is an important issue that is mainly tackled indirectly through the improvement of semantic similarity measures. In this article, we propose a more direct approach focusing on the identification of the neighbors of a thesaurus entry that are not semantically linked to this entry. This identification relies on a discriminative classifier trained from unsupervised selected examples for building a distributional model of the entry in texts. Its bad neighbors are found by applying this classifier to a representative set of occurrences of each of these neighbors. We evaluate the interest of this method for a large set of English nouns with various frequencies. 1 Introduction The work we present in this article focuses on the automatic building of a thesaurus from a corpus. As illustrated by Table 1, such thesaurus gives for each of its entries a list of words, called semantic neighbors, that are supposed to be semantically linked to the entry. Generally, each neighbor is associated with a weight that characterizes the strength of its link with the entry and all the neighbors of an entry are sorted according to the decreasing order of their weight. The term semantic neighbor is very generic and can have two main interpretations according to the kind of semantic relations it is based on: one relies only on paradigmatic relations, such as hypernymy or synonymy, while the other considers syntagmatic relations, called collocation relations by (Halliday and Hasan, 1976) in the context of lexical cohesion or “non-classical relations” by (Morris and Hirst, 2004). The distinction between these two interpretations refers to the distinction between the notions of semantic similarity and semantic relatedness as it was done in (Budanitsky and Hirst, 2006) or in (Zesch and Gurevych, 2010) for instance. However, the limit between these two notions is sometimes hard to find in existing work as terms semantic similarity and semantic relatedness are often used interchangeably. Moreover, semantic similarity is frequently considered as included into semantic relatedness and the two problems are often tackled by using the same methods. In the remainder of this article, we will use the term semantic similarity with its generic sense and the term semantic relatedness for referring more specifically to similarity based on syntagmatic relations. Following work such as (Grefenstette, 1994), a widespread way to build a thesaurus from a corpus is to use a semantic similarity measure for extracting the semantic neighbors of the entries of the thesaurus. Three main ways of implementing such measures can be distinguished. The first one relies on handcrafted resources in which semantic relations are clearly identified. Work based on WordNet-like lexical networks for building semantic similarity measures such as (Budanitsky and Hirst, 2006) or (Pedersen et al., 2004) falls into this category. 
These measures typically exploit the hierarchical structure of these networks, based on hypernymy relations. The second approach makes use of a less structured source of knowledge about words such as the definitions of classical dictionaries or the glosses of WordNet. WordNet’s glosses were used to support Lesklike measures in (Banerjee and Pedersen, 2003) and more recently, measures were also defined from Wikipedia or Wiktionaries (Gabrilovich and 561 Markovitch, 2007). The last option is the corpusbased approach, based on the distributional hypothesis (Firth, 1957): each word is characterized by the set of contexts from a corpus in which it appears and the semantic similarity of two words is computed from the contexts they share. This perspective was first adopted by (Grefenstette, 1994) and (Lin, 1998) and then, explored in details in (Curran and Moens, 2002b), (Weeds, 2003) or (Heylen et al., 2008). The problem of improving the results of the “classical” implementation of the distributional approach as it can be found in (Curran and Moens, 2002a) for instance was already tackled by some work. A part of these proposals focus on the weighting of the elements that are part of the contexts of words such as (Broda et al., 2009), in which the weights of context elements are turned into ranks, or (Zhitomirsky-Geffet and Dagan, 2009), followed and extended by (Yamamoto and Asakura, 2010), that proposes a bootstrapping method for modifying the weights of context elements according to the semantic neighbors found by an initial distributional similarity measure. However, another part of these proposals implies more radical changes. The use of dimensionality reduction techniques, for instance Latent Semantic Analysis in (Pad´o and Lapata, 2007), the multi-prototype (Reisinger and Mooney, 2010) or examplar-based models (Erk and Pado, 2010), the Deep Learning approach of (Huang et al., 2012) or the redefinition of the distributional approach in a Bayesian framework (Kazama et al., 2010) can be classified into this second category. The work we present in this article takes place in the framework defined by (Grefenstette, 1994) for implementing the distributional approach but proposes a new method for improving a thesaurus built in this context based on the identification of its bad semantic neighbors rather than on the adaptation of the weight of their features. 2 Principles Our work shares with (Zhitomirsky-Geffet and Dagan, 2009) the use of a kind of bootstrapping as it starts from a distributional thesaurus and to some extent, exploits it for its improvement. However, it adopts a more indirect approach: instead of selecting the “best” semantic neighbors of an entry in the thesaurus for adapting the weights of distributional context elements, it focuses on the detection of its bad semantic neighbors, that is to say the neighbors of the entry that are actually not semantically similar to the entry. In Table 1, waterworks for the entry cabdriver and hollowness for the entry machination are two examples of such kind of neighbors. By discarding these bad neighbors or at least by downgrading them, the rank of true semantic neighbors is expected to be lower. This makes the thesaurus more interesting to use since the quality of such thesaurus strongly decreases as the rank of the neighbors of its entries increases (see Section 4.1 for an illustration), which means in practice that only the first neighbors of an entry can be generally exploited. 
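As a point of reference for the rest of the paper, the sketch below builds such a window-based distributional thesaurus in the "classical" way, using the settings later detailed in section 3.2: a 3-word window (one word on each side of the target), co-occurrents restricted to nouns, verbs and adjectives, PMI weighting, cosine similarity, and the 100 closest neighbors per entry, with entries and candidate neighbors drawn from corpus nouns of frequency above 10. Treating the "soft filtering" step as a threshold on pair counts and the coarse POS labels are assumptions made for this example.

```python
import math
from collections import Counter, defaultdict

CONTENT_POS = {"NOUN", "VERB", "ADJ"}      # co-occurrents kept in the contexts

def build_thesaurus(sentences, entries, window=1, min_pair_count=2, k=100):
    """sentences: lists of (lemma, coarse_pos) pairs; entries: the nouns used both
    as thesaurus entries and as candidate neighbors."""
    contexts = defaultdict(Counter)        # word -> Counter of co-occurrents
    freq, total = Counter(), 0
    for sent in sentences:
        for i, (lemma, pos) in enumerate(sent):
            freq[lemma] += 1
            total += 1
            if lemma not in entries:
                continue
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i and sent[j][1] in CONTENT_POS:
                    contexts[lemma][sent[j][0]] += 1
    # PMI-weighted context vectors, dropping rare co-occurrence pairs (soft filtering)
    vectors = {}
    for word, counts in contexts.items():
        vec = {}
        for c, n in counts.items():
            if n >= min_pair_count:
                pmi = math.log((n / total) / ((freq[word] / total) * (freq[c] / total)))
                vec[c] = max(0.0, pmi)
        if vec:
            vectors[word] = vec
    def cosine(u, v):
        dot = sum(w * v.get(x, 0.0) for x, w in u.items())
        norm = math.sqrt(sum(w * w for w in u.values())) * math.sqrt(sum(w * w for w in v.values()))
        return dot / norm if norm else 0.0
    thesaurus = {}
    for e in entries:
        if e not in vectors:
            continue
        scored = [(cosine(vectors[e], vectors[n]), n)
                  for n in entries if n != e and n in vectors]
        thesaurus[e] = [n for _, n in sorted(scored, reverse=True)[:k]]
    return thesaurus
```

The reranking method introduced below leaves this construction untouched and operates only on the ranked neighbor lists it produces.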
The approach we propose for identifying the bad semantic neighbors of a thesaurus entry relies on the distributional hypothesis, as the method for the initial building of the thesaurus, but implements it in a different way. This hypothesis roughly specifies that from a semantic viewpoint, the meaning of a word can be characterized by the set of contexts in which this word occurs. As a consequence, two words are considered as semantically similar if they occur in a large enough set of shared contexts. In work such as (Curran and Moens, 2002a), this hypothesis is implemented by collecting for each entry the words it co-occurs with in a large corpus. This co-occurrence can be based either on the position of the word in the text in relation to the entry or on the presence of a syntactic relation between the entry and the word. As a result, the distributional representation of a word takes the unstructured form of a bag of words or the more structured form of a set of pairs {syntactic relation, word}. A variant of this approach was proposed in (Kazama et al., 2010) where the distributional representation of a word is modeled as a multinomial distribution with Dirichlet as prior. However, this approach globally faces a certain lack of diversity and complexity of the features of its models. For instance, features such as ngrams of words or ngrams of parts of speech are not considered whereas they are widely used in tasks such as word sense disambiguation (WSD) for instance, probably because they would lead to very large models and because similarity measures such as the Cosine measure are not necessarily suitable for heterogeneous representations (Alexandrescu and Kirchhoff, 2007). Hence, we propose in this article to build a discriminative model for repre562 abnormality defect [0.30], disorder [0.23], deformity [0.22], mutation [0.21], prolapse [0.21], anomaly [0.21] . . . agreement accord [0.44], deal [0.41], pact [0.38], treaty [0.36], negotiation [0.35], proposal [0.32], arrangement [0.30] . . . cabdriver waterworks [0.23], toolmaker [0.22], weaponeer [0.17], valkyry [0.17], wang [0.17], amusement-park [0.17] . . . machination hollowness [0.15], share-price [0.12], clockmaker [0.12], huguenot [0.12], wrangling [0.12], alternation [0.12] . . . Table 1: First neighbors of some entries of the distributional thesaurus of section 3.2 senting the contexts of a word since this kind of models are known to integrate easily a wide set of different types of features. This model aims more precisely at discriminating from a semantic viewpoint a word in context, i.e. in a sentence, from all other words and more particularly, from those of its neighbors in a distributional thesaurus that are likely to be actually not semantically similar to it. The underlying hypothesis follows the distributional principles: a word and a synonym should appear in the same contexts, which means that they are characterized by the same features. As a consequence, a model based on these features that can identify a word in a sentence is likely to identify also a synonym of this word in a sentence, and by extension, to identify a word that is paradigmatically linked to it. More precisely, we found that such model is specifically effective for discarding the bad neighbors of the entries of a distributional thesaurus. 
3 Improving a distributional thesaurus 3.1 Overview The principles presented in the previous section face one major problem compared to the “classical” distributional approach : the semantic similarity of two words can be evaluated directly by computing the similarity of their distributional representations. However, in our case, since this representation is a discriminative model, the similarity of two words can not be evaluated through the direct comparison of their models. These models have to be applied to words in context for being exploited. As a consequence, for deciding whether a neighbor of a thesaurus entry is a bad neighbor or not, the discriminative model of the entry has to be applied to occurrences of this neighbor in texts. Hence, the method we propose for improving a distributional thesaurus applies the following process to each of its entries: • building of a classifier for determining whether a word in a sentence corresponds or not to the entry; • selection of a set of examples sentences for each of the neighbors of the entry in the thesaurus; • application of the classifier to these sentences; • identification of bad neighbors according to the results of the classifier; • reranking of entry’s neighbors according to bad neighbors. 3.2 Building of the initial thesaurus Before introducing our method for improving distributional thesauri, we first present the way we build such a thesaurus. As in (Lin, 1998) or (Curran and Moens, 2002a), this building is based on the definition of a semantic similarity measure from a corpus. The corpus used for defining this measure was the AQUAINT-2 corpus, a middlesize corpus made of around 380 million words coming from news articles. Although our target language is English, we chose to limit deliberately the level of the tools applied for preprocessing texts to part-of-speech tagging and lemmatization to make possible the transposition of our method to a large set of languages. This seems to be a reasonable compromise between the approach of (Freitag et al., 2005), in which none normalization of words is done, and the more widespread use of syntactic parsers in work such as (Lin, 1998). More precisely, we used TreeTagger (Schmid, 1994) for performing the linguistic preprocessing of the AQUAINT-2 corpus. For the extraction of distributional data and the characteristics of the distributional similarity measure, we adopted the options of (Ferret, 2010), resulting from a kind of grid search procedure performed with the extended TOEFL test proposed in (Freitag et al., 2005) as an optimization objective. More precisely, the following characteristics were taken: • distributional contexts made of the cooccurrents collected in a 3 word window centered on each occurrence in the corpus of the target word. These co-occurrents were restricted to nouns, verbs and adjectives; • soft filtering of contexts: removal of cooccurrents with only one occurrence; • weighting function of co-occurrents in con563 texts = Pointwise Mutual Information (PMI) between the target word and the co-occurrent; • similarity measure between contexts, for evaluating the semantic similarity of two words = Cosine measure. The building of our initial thesaurus from the similarity measure above was performed classically by extracting the closest semantic neighbors of each of its entries. More precisely, the selected measure was computed between each entry and its possible neighbors. 
These neighbors were then ranked in the decreasing order of the values of this measure and the first 100 neighbors were kept as the semantic neighbors of the entry. Both entries and possible neighbors were AQUAINT-2 nouns whose frequency was higher than 10. 3.3 Building a discriminative model of words in context As mentioned in section 3.1, the starting point of our reranking process is the definition of a model for determining to what extent a word in a sentence, which is not supposed to be known in the context of this task, corresponds or not to a reference word E. This task can also be viewed as a tagging task in which the occurrences of a target word T are labeled with two tags: E and notE. In the context of our global objective, we are not of course interested by this task itself but rather by the fact that such classifier is likely to model the contexts in which E occurs and as a consequence, is also likely to model its meaning according to the distributional hypothesis. A step further, such classifier can be viewed as a means for testing whether or not a word has the same meaning as E. This is a problem close to WSD as it is performed in the context of the pseudo-word disambiguation paradigm (Gale et al., 1992): a pseudo-word is created with two senses, E and notE, notE corresponding to one or several words that are supposed to be representative of a meaning different from the meaning of E. The objective is then to build a classifier for distinguishing the pseudo-senses E and notE. As a consequence of this view, we adopt the same kind of features as the ones used for WSD for building our classifier. More precisely, we follow (Lee and Ng, 2002), a reference work for WSD, by adopting a Support Vector Machines (SVM) classifier with a linear kernel and three kinds of features for characterizing each considered occurrence in a text of the reference word E: • neighboring words; • Part-of-Speech (POS) of neighboring words; • local collocations. Only features based on syntactic relations are not taken from (Lee and Ng, 2002) since their use would have not been coherent with the window based approach of the building of our initial thesaurus. For the neighboring words features, we consider all plain words (common and proper nouns, verbs and adjectives) and adverbs that are present in the same sentence of an occurrence of E. Each neighboring word is represented under its lemma form as a binary feature whose value is equal to 1 when it is present in the same sentence as E. For the second type of features, we take more precisely the POS of the three words before E and those of the three words after E. Each pair {POS, position} corresponds to a binary feature for the SVM classifier. A special empty symbol is used for the POS when the position goes beyond the end or the beginning of the current sentence. Since we analyze texts with TreeTagger, the tagset is very close to the set of Penn Treebank tags. Finally, the local collocations features correspond to pairs of words, named collocations, in the neighborhood of E. A collocation is specified by the notation Ci,j, with i and j referring to the position of the first and the second word of the collocation. In our case, i and j take their values in the interval [−3, +3], similarly to POS. More precisely, the following 11 types of collocations are extracted for each occurrence of E: C−1,−1, C1,1, C−2,−2, C2,2 C−2,−1, C−1,1, C1,2, C−3,−1, C−2,1, C−1,2 and C1,3. 
As for POS, a special empty symbol stands for words beyond the end or the beginning of the sentence and similarly to neighboring words features, words in collocations are given under their lemma form. Each instance of the 11 types of collocations is represented by a tuple ⟨lemma1, position1, lemma2, position2⟩and leads to a binary feature for the SVM classifier. In accordance with the process of section 3.1, a specific SVM classifier is trained for each entry of our initial thesaurus, which requires the unsupervised selection of a set of positive and negative examples. The case of positive examples is simple: a fixed number of sentences containing at least one occurrence of the target entry are randomly chosen in the corpus used for building our 564 initial thesaurus and the first occurrence of this entry in the sentence is taken as a positive example. Since we want to characterize words as much as possible from a semantic viewpoint, the selection of negative examples is guided by our initial thesaurus. Choosing a neighbor of the entry with a high rank would guarantee in principle few false negative examples, that is to say words1 which are semantically similar to the entry, since the number of such neighbors strongly decreases as the rank of neighbors increases as we will illustrate it in section 4.1. In practice, taking neighbors with a rather small rank as negative examples is a better option because these examples are more useful in terms of discrimination as they are close to the transition zone between negative and positive examples. Moreover, in order to limit the risk of selecting only false negative examples, three neighbors are taken as negative examples, at ranks 10, 15 and 202. For each of these negative examples, a fixed number of sentences is selected following the same principles as for positive examples, which means that on average, the number of negative examples is equal to three times the number of positive examples. This ratio reflects the fact that among the neighbors of an entry, the number of those that are semantically similar to the entry is far lower than the number of those that are not. 3.4 Identification of bad neighbors and thesaurus reranking Once a word-in-context classifier was trained for an entry, it is used for identifying the bad neighbors of this entry, that is to say the neighbors that are not semantically similar to it. As this classifier can only be applied to words in context, a fixed number of representative occurrences have to be selected from our reference corpus for each neighbor of the entry. This selection is performed similarly to the selection of positive and negative examples in the previous section. The application of our word-in-context classifier to each of these occurrences determines whether the context of this occurrence is likely to be compatible with the context of an occurrence of the entry. In practice, the decision of the classifier is rarely 1More precisely, an example here is an occurrence of a word in a text but by extension, we also use the term example for referring to the word itself. 2It should be noted that these ranks come from the evaluation of section 4.1 but their choice is not the result of an optimization process. 
positive, which is not surprising: even if two words are semantically equivalent, each one is characterized by specific usages, especially in a given corpus, and some features of our classifier, such as the collocation features, are more likely to capture such specificities than the unigrams of “classical” distributional contexts. As a consequence, we consider a positive outcome of our classifier to be a significant hint of the presence of a word that is semantically similar to the entry, and we keep a neighbor as a “good” neighbor if at least a fixed number G of its occurrences, among those selected as reference, are tagged positively by our word-in-context classifier. Conversely, a neighbor is defined as “bad” if the number of its reference occurrences tagged positively by our classifier is lower than G. The neighbors of an entry identified as bad neighbors are not fully discarded. They are instead downgraded to the end of the list of neighbors, and their initial order is left unchanged among the downgraded neighbors. It should be noted that the word-in-context classifier is not applied to the neighbors whose occurrences were used for its training, as this would frequently lead to downgrading these neighbors, which is not necessarily optimal since we chose them with a rather low rank.

4 Experiments and evaluation

4.1 Initial thesaurus evaluation

Table 2 shows the results of the evaluation of our initial thesaurus, obtained by comparing the selected semantic neighbors with two complementary reference resources: WordNet 3.0 synonyms (Miller, 1990) [W], which characterize a semantic similarity based on paradigmatic relations, and the Moby thesaurus (Ward, 1996) [M], which gathers a larger set of types of relations and is more representative of semantic relatedness.3 The fourth column of Table 2, which gives the average number of synonyms and similar words in our references for the AQUAINT-2 nouns, also illustrates the difference between these two resources in terms of richness. A fusion of the two resources is also considered [WM]. As our objective is to evaluate the extracted semantic neighbors and not the ability to rebuild the reference resources, these resources were filtered to discard entries and synonyms that are not part of the AQUAINT-2 vocabulary (see the difference between the number of words in the first column and the number of evaluated words in the third column).

3The Moby thesaurus includes more precisely both paradigmatic and syntactic relations, but we will sometimes use the term synonym as a shortcut for referring to all the words associated with one of its entries.

freq.      ref.  #eval. words  #syn./word  recall  R-prec.  MAP   P@1   P@5   P@10  P@100
all        W     10,473        2.9         24.6    8.2      9.8   11.7  5.1   3.4   0.7
# 14,670   M     9,216         50.0        9.5     6.7      3.2   24.1  16.4  13.0  4.8
           WM    12,243        38.7        9.8     7.7      5.6   22.5  14.1  10.8  3.8
high       W     3,690         3.7         28.3    11.1     12.5  17.2  7.7   5.1   1.0
# 4,378    M     3,732         69.4        11.4    10.2     4.9   41.3  28.0  21.9  7.9
           WM    4,164         63.2        11.5    11.0     6.5   41.3  26.8  20.8  7.3
middle     W     3,732         2.6         28.6    10.4     12.5  13.6  5.8   3.7   0.7
# 5,175    M     3,306         41.3        9.3     6.5      3.1   18.7  13.1  10.4  3.8
           WM    4,392         32.0        9.8     9.3      7.4   20.9  12.3  9.3   3.2
low        W     3,051         2.3         11.9    2.1      3.3   2.6   1.2   0.9   0.3
# 5,117    M     2,178         30.1        2.8     1.2      0.5   2.5   1.5   1.5   0.9
           WM    3,687         18.9        3.5     2.1      2.4   3.3   1.7   1.5   0.7

Table 2: Evaluation of semantic neighbor extraction
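For concreteness, the identification and downgrading of bad neighbors described in Section 3.4 can be summarized by the sketch below; the helper names and the scikit-learn-style predict interface are assumptions, and the threshold G is the parameter whose setting is discussed in Section 4.2.

```python
def rerank_neighbors(entry, neighbors, classifier, occurrences, G,
                     training_neighbors=()):
    """Move neighbors that the word-in-context classifier of `entry`
    rarely tags as positive to the end of the neighbor list.

    neighbors          : neighbors of `entry` ranked by the initial thesaurus
    classifier         : word-in-context classifier trained for `entry`
    occurrences[w]     : feature vectors of sampled sentences containing w
    training_neighbors : neighbors whose occurrences were used as negative
                         training examples (never downgraded)
    """
    good, bad = [], []
    for neighbor in neighbors:
        if neighbor in training_neighbors or neighbor not in occurrences:
            good.append(neighbor)
            continue
        labels = classifier.predict(occurrences[neighbor])
        positives = sum(1 for label in labels if label == 1)
        (good if positives >= G else bad).append(neighbor)
    # bad neighbors keep their initial relative order at the end of the list
    return good + bad
```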
Since the frequency of words is an important factor in distributional approaches, we give our results globally but also for three ranges of frequencies that split our set of nouns into roughly equal parts: high frequency (frequency > 1000), middle frequency (100 < frequency ≤1000) and low frequency (10 < frequency ≤100). These results take the form of several measures and start at the fifth column by the proportion of the synonyms and similar words of our references that are found among the first 100 extracted neighbors of each noun. As these neighbors are ranked according to their similarity value with their target word, the evaluation measures are taken from the Information Retrieval field by replacing documents with synonyms and queries with target words (see the four last columns of Table 2). The R-precision (Rprec.) is the precision after the first R neighbors were retrieved, R being the number of reference synonyms; the Mean Average Precision (MAP) is the average of the precision value after a reference synonym is found; precision at different cut-offs is given for the 1, 5, 10 and 100 first neighbors. All these values are given as percentages. The results of Table 2 lead to three main observations. First, the level of results heavily depends on the frequency range of target words: the best results are obtained for high frequency words while evaluation measures significantly decrease for words whose frequency is low. Second, the characteristics of the reference resources have a significant impact on results. WordNet provides a restricted number of synonyms for each noun while the Moby thesaurus contains for each entry a large number of synonyms and similar words. As a consequence, the precisions at different cut-offs have a significantly higher value with Moby as reference than with WordNet as reference. Finally, the results of Table 2 are compatible with those of (Lin, 1998) for instance (R-prec. = 11.6 and MAP = 8.1 with WM as reference for all entries of the thesaurus at http://webdocs.cs.ualberta. ca/lindek/Downloads/sim.tgz) if we take into account the fact that the thesaurus of Lin was built from a much larger corpus and with syntactic co-occurrences. 4.2 Implementation issues The implementation of the method we have presented in section 3 raises several issues. One of these concerns the occurrences to select from texts of both the entries of the thesaurus and their neighbors. These occurrences are used both for the training of our word-in-context classifier and for the identification of bad neighbors. In practice, we extract randomly from our reference corpus, i.e. the AQUAINT-2 corpus, a fixed number of sentences, equal to 250, for each word of the vocabulary of our initial thesaurus and exploit them for the two tasks. This extraction is performed on the basis of the lemma form of these words. It should be noted that 250 is the upper limit of the number of occurrences by word since the frequency in the corpus of many words is lower than 250. When this limit is not reached, all the available oc566 currences are taken, which may be no more than 11 occurrences for certain low-frequency words. The upper limit of 250 is halfway between the 385 training examples on average for the Lexical Sample Task of Senseval 1 and the 118 training examples on average for the same task of Senseval 2. The training of our word-in-context classifier is also an important issue. As mentioned before, this classifier is a linear SVM. Hence, only its C regularization parameter can be optimized. 
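As an illustration of how one of these per-entry classifiers could be put together, the sketch below builds the three feature families of Section 3.3 and trains a linear SVM; scikit-learn, the feature-naming scheme, and the POS tag prefixes are assumptions that are not part of the original implementation.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

PLAIN = ("NN", "NP", "VB", "JJ", "RB")   # common/proper nouns, verbs, adjectives, adverbs
EMPTY = "<EMPTY>"                        # special symbol for positions outside the sentence

def wsd_features(sent, i):
    """Features of the occurrence at position i in sent = [(lemma, pos), ...]."""
    feats = {}
    # 1) neighboring plain words of the same sentence, as binary lemma features
    for j, (lemma, pos) in enumerate(sent):
        if j != i and pos.startswith(PLAIN):
            feats["w=" + lemma] = 1
    # 2) POS of the three words before and after the target occurrence
    for off in (-3, -2, -1, 1, 2, 3):
        pos = sent[i + off][1] if 0 <= i + off < len(sent) else EMPTY
        feats["pos%+d=%s" % (off, pos)] = 1
    # 3) the 11 local collocations C_{i,j}
    for a, b in [(-1, -1), (1, 1), (-2, -2), (2, 2), (-2, -1), (-1, 1),
                 (1, 2), (-3, -1), (-2, 1), (-1, 2), (1, 3)]:
        w1 = sent[i + a][0] if 0 <= i + a < len(sent) else EMPTY
        w2 = sent[i + b][0] if 0 <= i + b < len(sent) else EMPTY
        feats["C%d,%d=%s_%s" % (a, b, w1, w2)] = 1
    return feats

def train_word_in_context_classifier(pos_examples, neg_examples):
    """pos/neg_examples: lists of (sentence, target position) pairs."""
    X = [wsd_features(s, i) for s, i in pos_examples + neg_examples]
    y = [1] * len(pos_examples) + [0] * len(neg_examples)
    vectorizer = DictVectorizer()
    model = LinearSVC(C=1.0).fit(vectorizer.fit_transform(X), y)
    return vectorizer, model
```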
Since we have one specific classifier for each thesaurus entry, such an optimization has globally a high cost, even for a linear kernel. Hence, we first evaluated, through 5-fold cross-validation, the results of these classifiers with a default value of C equal to 1. Table 3 gives their average accuracy along with its standard deviation for all the entries of the thesaurus and for the three frequency ranges of Table 2.

                     all    high   middle  low
accuracy             86.2   86.1   86.0    86.5
standard deviation   6.1    4.2    5.7     7.6

Table 3: Results of word-in-context classifiers

This table shows a globally high level of results, with similar values for all the frequency ranges of entries.4 Hence, we decided not to optimize the C parameter and to adopt the default value of 1 for all the word-in-context classifiers.

4The standard deviation is a little higher for the lowest frequencies, but it should be noted that the low number of examples for low-frequency entries does not seem to have a strong impact on the results of such classifiers.

The last and most important implementation issue is the setting of the threshold G for determining whether a neighbor is likely to be a bad neighbor. For this setting, we randomly chose a subset of 859 entries of our initial thesaurus, which corresponds to 10% of the entries with at least one true neighbor in any of our references. Figure 1 gives the results of the reranked thesaurus for these entries in terms of R-precision and MAP against reference W5 for various values of G. Although the level of these measures does not change much for G > 5, the graph of Figure 1 shows that G = 15 appears to be an optimal value. Hence, this is the value used for the detailed evaluation of the next section.

Figure 1: R-precision and MAP (with W as reference) for various values of the G threshold

4.3 Evaluation of the reranked thesaurus

Table 4 gives the evaluation of the application of our reranking method to the initial thesaurus, according to the same principles as in section 4.1. The value of each measure is given together with its difference from the corresponding value for the initial thesaurus. As the recall measure and the precision at the last rank do not change in a reranking process, they are not given again. The first thing to notice is that at the global scale, all measures for all references are significantly improved6, which means that our hypothesis about the possibility for a discriminative classifier to capture the meaning of a word tends to be validated. This is an interesting result since the features upon which this classifier was built were taken from WSD and were not specifically selected for this task. As a consequence, there is probably some room for improvement. If we go into details, Table 4 clearly shows two main trends. First, the improvement of results is particularly marked for middle frequency entries, then for low frequency and finally for high frequency entries. Because of their already high level in the initial thesaurus, results for high frequency entries are difficult to improve, but it is important to note that our selection of bad neighbors has a very low error rate, which at least preserves these results. This is confirmed by the fact that, with WordNet as reference, only 744 neighbors were found wrongly downgraded, spread over 686 entries, which represents only 5% of all downgraded neighbors.
5The use of W as reference is justified by the fact that the number of synonyms for an entry in W is more compatible, especially for R-precision, with the real use of the resulting thesaurus in an application.

6The statistical significance of differences with the initial thesaurus was evaluated by a paired Wilcoxon test with p-value < 0.05 and < 0.01 († and ‡ for non-significance).

freq.    ref.  R-prec.      MAP         P@1          P@5         P@10
all      W     9.1 (0.9)    10.7 (0.9)  12.8 (1.1)   5.6 (0.5)   3.7 (0.3)
         M     7.2 (0.5)    3.5 (0.3)   26.5 (2.4)   17.9 (1.5)  14.0 (1.0)
         WM    8.4 (0.7)    6.1 (0.5)   24.8 (2.3)   15.4 (1.3)  11.7 (0.9)
high     W     11.3 (0.2)†  12.6 (0.1)  17.3 (0.1)‡  7.8 (0.1)‡  5.1 (0.0)
         M     10.3 (0.1)   4.9 (0.0)   42.1 (0.8)   28.4 (0.4)  22.1 (0.2)
         WM    11.1 (0.1)   6.6 (0.1)   42.0 (0.7)   27.2 (0.4)  20.9 (0.1)
middle   W     11.8 (1.4)   13.8 (1.3)  15.7 (2.1)   6.5 (0.7)   4.1 (0.4)
         M     7.3 (0.8)    3.6 (0.5)   23.3 (4.6)   16.0 (2.9)  12.4 (2.0)
         WM    10.3 (1.0)   8.1 (0.7)   25.1 (4.2)   14.6 (2.3)  10.9 (1.6)
low      W     3.2 (1.1)    4.6 (1.3)   3.9 (1.3)    1.8 (0.6)   1.3 (0.4)
         M     1.8 (0.6)    0.8 (0.3)   4.4 (1.9)    2.9 (1.4)   2.6 (1.1)
         WM    3.1 (1.0)    3.3 (0.9)   5.1 (1.8)    2.9 (1.2)   2.3 (0.8)

Table 4: Results of the reranking of semantic neighbors

The second main trend of Table 4 concerns the type of semantic relations: results with Moby as reference are improved to a larger extent than results with WordNet as reference. This suggests that our procedure is more effective for semantically related words than for semantically similar words, which can be considered somewhat surprising since the notion of context in our discriminative classifier seems a priori stricter than in “classical” distributional contexts. However, this point must be investigated further, as a significant part of the relations in Moby, even if they do not represent the largest part of them, are paradigmatic relations.

WordNet    respect, admiration, regard
Moby       admiration, appreciation, acceptance, dignity, regard, respect, account, adherence, consideration, estimate, estimation, fame, greatness, reverence + 79 words more
initial    cordiality, gratitude, admiration, comradeship, back-scratching, perplexity, respect, ruination, appreciation, neighbourliness ...
reranking  gratitude, admiration, respect, appreciation, neighborliness, trust, empathy, goodwill, reciprocity, half-staff, affection, self-esteem, reverence, longing, regard ...

Table 5: Impact of our reranking for the entry esteem

Table 5 illustrates more precisely the impact of our reranking procedure for the middle frequency entry esteem. Its WordNet row gives all the reference synonyms for this entry in WordNet, while its Moby row gives the first reference related words for this entry in Moby. In our initial thesaurus, the first two neighbors of esteem that are present in our reference resources are admiration (rank 3) and respect (rank 7). The reranking produces a thesaurus in which these two words appear as the second and the third neighbors of the entry because neighbors without a clear relation to it, such as back-scratching, were downgraded, while its third synonym in WordNet is raised from rank 22 to rank 15. Moreover, the number of neighbors among the first 15 that are present in Moby increases from 3 to 5.

5 Related work

The building of a distributional thesaurus is generally viewed as an application or a mode of evaluation of work on semantic similarity or semantic relatedness. As a consequence, the improvement of such a thesaurus is generally not directly addressed but is a possible consequence of the improvement of semantic similarity measures.
However, the extent of this improvement is rarely evaluated as most of the work about semantic similarity is evaluated on datasets such as the WordSim-353 test collection (Gabrilovich and Markovitch, 2007), which are only partially representative of the results for thesaurus building. If we consider more specifically the problem of improving semantic similarity, and by the way thesauri, in a given paradigm, (Broda et al., 2009), (Zhitomirsky-Geffet and Dagan, 2009) and (Yamamoto and Asakura, 2010), which all take place in the paradigm defined by (Grefenstette, 1994), are the closest works to ours. (Broda et al., 2009) proposes a new weighting scheme of words in distributional contexts that replaces the weight of 568 word by a function of its rank in the context, which is a way to be less dependent on the values of a particular weighting function. (Zhitomirsky-Geffet and Dagan, 2009) shares with our work the use of bootstrapping by relying on an initial thesaurus to derive means of improving it. More specifically, (Zhitomirsky-Geffet and Dagan, 2009) assumes that the first neighbors of an entry are more relevant than the others and as a consequence, that their most significant features are also representative of the meaning of the entry. The neighbors of the entry are reranked according to this hypothesis by increasing the weight of these features to favor their influence in the distributional contexts that support the evaluation of the similarity between the entry and its neighbors. (Yamamoto and Asakura, 2010) is a variant of (Zhitomirsky-Geffet and Dagan, 2009) that takes into account a larger number of features for the reranking process. One main difference between all these works and ours is that they assume that the initial thesaurus was built by relying on distributional contexts represented as bags-of-words. Our method does not make this assumption as its reranking is based on a classifier built in an unsupervised way7 from and applied to the corpus used for building the initial thesaurus. As a consequence, it could even be applied to other paradigms than (Grefenstette, 1994). If we focus more specifically on the improvement of distributional thesauri, (Ferret, 2012) is the most comparable work to ours, both because it is specifically focused on this task and it is based on the same evaluation framework. (Ferret, 2012) selects in an unsupervised way a set of positive and negative examples of semantically similar words from the initial thesaurus, uses them for training a classifier deciding whether or not a pair of words are semantically similar and finally, applies this classifier to the neighbors of each entry for reranking them. One of the objectives of (Ferret, 2012) was to rebalance the initial thesaurus in favor of low frequency entries. Although this objective was reached, the resulting thesaurus tends to have a lower performance than the initial thesaurus for high frequency entries and for synonyms. The problem with high frequency entries comes from the fact that applying a machine learning classifier to its training examples does not lead to a perfect result. The problem with synonyms 7It is a supervised classifier but its training set is selected in an unsupervised way. arises from the imbalance between semantic similarity and semantic relatedness among training examples: most of selected examples were pairs of words linked by semantic relatedness because this kind of relations are more frequent among semantic neighbors than relations based on semantic similarity. 
In both cases, the method proposed in (Ferret, 2012) faces the problem of relying only on the distributional thesaurus it tries to improve. This is an important difference with the method presented in this article, which mainly exploits the context of the occurrences of words in the corpus used for the building the initial thesaurus. As a consequence, at a global scale, our reranked thesaurus outperforms the final thesaurus of (Ferret, 2012) for nearly all measures. The only exceptions are the P@1 values for M and WM as reference. However, it should be noted that values for both MAP and R-precision, which are more reliable measures than P@1, are identical for the two thesauri and the same references. 6 Conclusion and perspectives In this article, we have presented a new approach for reranking the semantic neighbors of a distributional thesaurus. This approach relies on the unsupervised building of discriminative classifiers dedicated to the identification of its entries in texts, with the objective to characterize their meaning according to the distributional hypothesis. The classifier built for an entry is then applied to a set of occurrences of its neighbors for identifying and downgrading those that are not semantically related to the entry. The proposed method was tested on a large thesaurus of nouns for English and led to a significant improvement of this thesaurus, especially for middle and low frequency entries and for semantic relatedness. We plan to extend this work by taking into account the notion of word sense as it is done in (Reisinger and Mooney, 2010) or (Huang et al., 2012): since we rely on occurrences of words in texts, this extension should be quite straightforward by turning our word-in-context classifiers into true word sense classifiers. Acknowledgments This work was partly supported by the project ANR ASFALDA ANR-12-CORD-0023. 569 References Andrei Alexandrescu and Katrin Kirchhoff. 2007. Data-driven graph construction for semi-supervised graph-based learning in NLP. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2007), pages 204– 211, Rochester, New York. Satanjeev Bano Banerjee and Ted Pedersen. 2003. Extended gloss overlaps as a measure of semantic relatedness. In Eighteenth International Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico. Bartosz Broda, Maciej Piasecki, and Stan Szpakowicz. 2009. Rank-Based Transformation in Measuring Semantic Relatedness. In 22nd Canadian Conference on Artificial Intelligence, pages 187–190. Alexander Budanitsky and Graeme Hirst. 2006. Evaluating wordnet-based measures of lexical semantic relatedness. Computational Linguistics, 32(1):13– 47. James Curran and Marc Moens. 2002a. Scaling context space. In 40th Annual Meeting of the Association for Computational Linguistics (ACL-02), pages 231–238, Philadelphia, Pennsylvania, USA. James R. Curran and Marc Moens. 2002b. Improvements in automatic thesaurus extraction. In Workshop of the ACL Special Interest Group on the Lexicon (SIGLEX), pages 59–66, Philadelphia, USA. Katrin Erk and Sebastian Pado. 2010. Exemplar-based models for word meaning in context. In 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), short paper, pages 92–97, Uppsala, Sweden, July. Olivier Ferret. 2010. Testing semantic similarity measures for extracting synonyms from a corpus. 
In Seventh conference on International Language Resources and Evaluation (LREC’10), Valletta, Malta. Olivier Ferret. 2012. Combining bootstrapping and feature selection for improving a distributional thesaurus. In 20th European Conference on Artificial Intelligence (ECAI 2012), pages 336–341, Montpellier, France. John R. Firth, 1957. Studies in Linguistic Analysis, chapter A synopsis of linguistic theory 1930-1955, pages 1–32. Blackwell, Oxford. Dayne Freitag, Matthias Blume, John Byrnes, Edmond Chow, Sadik Kapadia, Richard Rohwer, and Zhiqiang Wang. 2005. New experiments in distributional representations of synonymy. In Ninth Conference on Computational Natural Language Learning (CoNLL), pages 25–32, Ann Arbor, Michigan, USA. Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using wikipediabased explicit semantic analysis. In 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), pages 6–12. William A Gale, Kenneth W Church, and David Yarowsky. 1992. Work on statistical methods for word sense disambiguation. In AAAI Fall Symposium on Probabilistic Approaches to Natural Language, pages 54–60. Gregory Grefenstette. 1994. Explorations in automatic thesaurus discovery. Kluwer Academic Publishers. Michael A. K. Halliday and Ruqaiya Hasan. 1976. Cohesion in English. Longman, London. Kris Heylen, Yves Peirsmany, Dirk Geeraerts, and Dirk Speelman. 2008. Modelling Word Similarity: An Evaluation of Automatic Synonymy Extraction Algorithms. In Sixth conference on International Language Resources and Evaluation (LREC 2008), Marrakech, Morocco. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In 50th Annual Meeting of the Association for Computational Linguistics (ACL’12), pages 873–882. Jun’ichi Kazama, Stijn De Saeger, Kow Kuroda, Masaki Murata, and Kentaro Torisawa. 2010. A bayesian method for robust estimation of distributional similarities. In 48th Annual Meeting of the Association for Computational Linguistics, pages 247–256, Uppsala, Sweden. Yoong Keok Lee and Hwee Tou Ng. 2002. An empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation. In 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 41–48. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In 17th International Conference on Computational Linguistics and 36th Annual Meeting of the Association for Computational Linguistics (ACL-COLING’98), pages 768–774, Montral, Canada. George A. Miller. 1990. WordNet: An On-Line Lexical Database. International Journal of Lexicography, 3(4). Jane Morris and Graeme Hirst. 2004. Nonclassical lexical semantic relations. In Workshop on Computational Lexical Semantics of Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 46–51, Boston, MA. Sebastian Pad´o and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199. 570 Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. Wordnet::similarity - measuring the relatedness of concepts. In HLT-NAACL 2004, demonstration papers, pages 38–41, Boston, Massachusetts, USA. Joseph Reisinger and Raymond J. Mooney. 2010. Multi-prototype vector-space models of word meaning. 
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2010), pages 109–117, Los Angeles, California, June. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In International Conference on New Methods in Language Processing. Grady Ward. 1996. Moby thesaurus. Moby Project. Julie Weeds. 2003. Measures and Applications of Lexical Distributional Similarity. Ph.D. thesis, Department of Informatics, University of Sussex. Kazuhide Yamamoto and Takeshi Asakura. 2010. Even unassociated features can improve lexical distributional similarity. In Second Workshop on NLP Challenges in the Information Explosion Era (NLPIX 2010), pages 32–39, Beijing, China. Torsten Zesch and Iryna Gurevych. 2010. Wisdom of crowds versus wisdom of linguists - measuring the semantic relatdness of words. Natural Language Engineering, 16(1):25–59. Maayan Zhitomirsky-Geffet and Ido Dagan. 2009. Bootstrapping Distributional Feature Vector Quality. Computational Linguistics, 35(3):435–461. 571
2013
55
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 572–582, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Models of Semantic Representation with Visual Attributes Carina Silberer, Vittorio Ferrari, Mirella Lapata School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB [email protected], [email protected], [email protected] Abstract We consider the problem of grounding the meaning of words in the physical world and focus on the visual modality which we represent by visual attributes. We create a new large-scale taxonomy of visual attributes covering more than 500 concepts and their corresponding 688K images. We use this dataset to train attribute classifiers and integrate their predictions with text-based distributional models of word meaning. We show that these bimodal models give a better fit to human word association data compared to amodal models and word representations based on handcrafted norming data. 1 Introduction Recent years have seen increased interest in grounded language acquisition, where the goal is to extract representations of the meaning of natural language tied to the physical world. The language grounding problem has assumed several guises in the literature such as semantic parsing (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Kate and Mooney, 2007; Lu et al., 2008; B¨orschinger et al., 2011), mapping natural language instructions to executable actions (Branavan et al., 2009; Tellex et al., 2011), associating simplified language to perceptual data such as images or video (Siskind, 2001; Roy and Pentland, 2002; Gorniak and Roy, 2004; Yu and Ballard, 2007), and learning the meaning of words based on linguistic and perceptual input (Bruni et al., 2012b; Feng and Lapata, 2010; Johns and Jones, 2012; Andrews et al., 2009; Silberer and Lapata, 2012). In this paper we are concerned with the latter task, namely constructing perceptually grounded distributional models. The motivation for models that do not learn exclusively from text is twofold. From a cognitive perspective, there is mounting experimental evidence suggesting that our interaction with the physical world plays an important role in the way we process language (Barsalou, 2008; Bornstein et al., 2004; Landau et al., 1998). From an engineering perspective, the ability to learn representations for multimodal data has many practical applications including image retrieval (Datta et al., 2008) and annotation (Chai and Hung, 2008), text illustration (Joshi et al., 2006), object and scene recognition (Lowe, 1999; Oliva and Torralba, 2007; Fei-Fei and Perona, 2005), and robot navigation (Tellex et al., 2011). One strand of research uses feature norms as a stand-in for sensorimotor experience (Johns and Jones, 2012; Andrews et al., 2009; Steyvers, 2010; Silberer and Lapata, 2012). Feature norms are obtained by asking native speakers to write down attributes they consider important in describing the meaning of a word. The attributes represent perceived physical and functional properties associated with the referents of words. For example, apples are typically green or red, round, shiny, smooth, crunchy, tasty, and so on; dogs have four legs and bark, whereas chairs are used for sitting. 
Feature norms are instrumental in revealing which dimensions of meaning are psychologically salient, however, their use as a proxy for people’s perceptual representations can itself be problematic (Sloman and Ripps, 1998; Zeigenfuse and Lee, 2010). The number and types of attributes generated can vary substantially as a function of the amount of time devoted to each concept. It is not entirely clear how people generate attributes and whether all of these are important for representing concepts. Finally, multiple participants are required to create a representation for each con572 cept, which limits elicitation studies to a small number of concepts and the scope of any computational model based on feature norms. Another strand of research focuses exclusively on the visual modality, even though the grounding problem could involve auditory, motor, and haptic modalities as well. This is not entirely surprising. Visual input represents a major source of data from which humans can learn semantic representations of linguistic and non-linguistic communicative actions (Regier, 1996). Furthermore, since images are ubiquitous, visual data can be gathered far easier than some of the other modalities. Distributional models that integrate the visual modality have been learned from texts and images (Feng and Lapata, 2010; Bruni et al., 2012b) or from ImageNet (Deng et al., 2009), e.g., by exploiting the fact that images in this database are hierarchically organized according to WordNet synsets (Leong and Mihalcea, 2011). Images are typically represented on the basis of low-level features such as SIFT (Lowe, 2004), whereas texts are treated as bags of words. Our work also focuses on images as a way of physically grounding the meaning of words. We, however, represent them by high-level visual attributes instead of low-level image features. Attributes are not concept or category specific (e.g., animals have stripes and so do clothing items; balls are round, and so are oranges and coins), and thus allow us to express similarities and differences across concepts more easily. Furthermore, attributes allow us to generalize to unseen objects; it is possible to say something about them even though we cannot identify them (e.g., it has a beak and a long tail). We show that this attribute-centric approach to representing images is beneficial for distributional models of lexical meaning. Our attributes are similar to those provided by participants in norming studies, however, importantly they are learned from training data (a database of images and their visual attributes) and thus generalize to new images without additional human involvement. In the following we describe our efforts to create a new large-scale dataset that consists of 688K images that match the same concrete concepts used in the feature norming study of McRae et al. (2005). We derive a taxonomy of 412 visual attributes and explain how we learn attribute classifiers following recent work in computer vision (Lampert et al., 2009; Farhadi et al., 2009). Next, we show that this attribute-based image representation can be usefully integrated with textual data to create distributional models that give a better fit to human word association data over models that rely on human generated feature norms. 2 Related Work Grounding semantic representations with visual information is an instance of multimodal learning. 
In this setting the data consists of multiple input modalities with different representations and the learner’s objective is to extract a unified representation that fuses the modalities together. The literature describes several successful approaches to multimodal learning using different variants of deep networks (Ngiam et al., 2011; Srivastava and Salakhutdinov, 2012) and data sources including text, images, audio, and video. Special-purpose models that address the fusion of distributional meaning with visual information have been also proposed. Feng and Lapata (2010) represent documents and images by a common multimodal vocabulary consisting of textual words and visual terms which they obtain by quantizing SIFT descriptors (Lowe, 2004). Their model is essentially Latent Dirichlet Allocation (LDA, Blei et al., 2003) trained on a corpus of multimodal documents (i.e., BBC news articles and their associated images). Meaning in this model is represented as a vector whose components correspond to wordtopic distributions. A related model has been proposed by Bruni et al. (2012b) who obtain distinct representations for the textual and visual modalities. Specifically, they extract a visual space from images contained in the ESP-Game data set (von Ahn and Dabbish, 2004) and a text-based semantic space from a large corpus collection totaling approximately two billion words. They concatenate the two modalities and subsequently project them to a lower-dimensionality space using Singular Value Decomposition (Golub et al., 1981). Traditionally, computer vision algorithms describe visual phenomena (e.g., objects, scenes, faces, actions) by giving each instance a categorical label (e.g., cat, beer garden, Brad Pitt, drinking). The ability to describe images by their attributes allows to generalize to new instances for which there are no training examples available. Moreover, attributes can transcend category and task boundaries and thus provide a generic description of visual data. Initial work (Ferrari and Zisserman, 2007) 573 focused on simple color and texture attributes (e.g., blue, stripes) and showed that these can be learned in a weakly supervised setting from images returned by a search engine when using the attribute as a query. Farhadi et al. (2009) were among the first to use visual attributes in an object recognition task. Using an inventory of 64 attribute labels, they developed a dataset of approximately 12,000 instances representing 20 objects from the PASCAL Visual Object Classes Challenge 2008 (Everingham et al., 2008). Visual semantic attributes (e.g., hairy, four-legged) were used to identify familiar objects and to describe unfamiliar objects when new images and bounding box annotations were provided. Lampert et al. (2009) showed that attribute-based representations can be used to classify objects when there are no training examples of the target classes available. Their dataset contained over 30,000 images representing 50 animal concepts and used 85 attributes from the norming study of Osherson et al. (1991). Attribute-based representations have also been applied to the tasks of face detection (Kumar et al., 2009), action identification (Liu et al., 2011), and scene recognition (Patterson and Hays, 2012). The use of visual attributes in models of distributional semantics is novel to our knowledge. We argue that they are advantageous for two reasons. Firstly, they are cognitively plausible; humans employ visual attributes when describing the properties of concept classes. 
Secondly, they occupy the middle ground between non-linguistic low-level image features and linguistic words. Attributes crucially represent image properties, however by being words themselves, they can be easily integrated in any text-based distributional model thus eschewing known difficulties with rendering images into word-like units. A key prerequisite in describing images by their attributes is the availability of training data for learning attribute classifiers. Although our database shares many features with previous work (Lampert et al., 2009; Farhadi et al., 2009) it differs in focus and scope. Since our goal is to develop distributional models that are applicable to many words, it contains a considerably larger number of concepts (i.e., more than 500) and attributes (i.e., 412) based on a detailed taxonomy which we argue is cognitively plausible and beneficial for image and natural language processing tasks. Our experiments evaluate a number of models previously proposed in the literature and in Attribute Categories Example Attributes color patterns (25) is red, has stripes diet (35) eats nuts, eats grass shape size (16) is small, is chubby parts (125) has legs, has wheels botany;anatomy (25;78) has seeds, has fur behavior (in)animate (55) flies, waddles, pecks texture material (36) made of metal, is shiny structure (3) 2 pieces, has pleats Table 1: Attribute categories and examples of attribute instances. Parentheses denote the number of attributes per category. all cases show that the attribute-based representation brings performance improvements over just using the textual modality. Moreover, we show that automatically computed attributes are comparable and in some cases superior to those provided by humans (e.g., in norming studies). 3 The Attribute Dataset Concepts and Images We created a dataset of images and their visual attributes for the nouns contained in McRae et al.’s (2005) feature norms. The norms cover a wide range of concrete concepts including animate and inanimate things (e.g., animals, clothing, vehicles, utensils, fruits, and vegetables) and were collected by presenting participants with words and asking them to list properties of the objects to which the words referred. To avoid confusion, in the remainder of this paper we will use the term attribute to refer to properties of concepts and the term feature to refer to image features, such as color or edges. Images for the concepts in McRae et al.’s (2005) production norms were harvested from ImageNet (Deng et al., 2009), an ontology of images based on the nominal hierarchy of WordNet (Fellbaum, 1998). ImageNet has more than 14 million images spanning 21K WordNet synsets. We chose this database due to its high coverage and the high quality of its images (i.e., cleanly labeled and high resolution). McRae et al.’s norms contain 541 concepts out of which 516 appear in ImageNet1 and are represented by 688K images overall. The average number of images per concept is 1,310 with the most popular being closet (2,149 images) and the least popular prune (5 images). 1Some words had to be modified in order to match the correct synset, e.g., tank (container) was found as storage tank. 
574 behavior eats, walks, climbs, swims, runs diet drinks water, eats anything shape size is tall, is large anatomy has mouth, has head, has nose, has tail, has claws, has jaws, has neck, has snout, has feet, has tongue color patterns is black, is brown, is white botany has skin, has seeds, has stem, has leaves, has pulp color patterns purple, white, green, has green top shape size is oval, is long texture material is shiny behavior rolls parts has step through frame, has fork, has 2 wheels, has chain, has pedals has gears, has handlebar, has bell, has breaks has seat, has spokes texture material made of metal color patterns different colors, is black, is red, is grey, is silver Table 2: Human-authored attributes for bear, eggplant, and bike. The images depicting each concept were randomly partitioned into a training, development, and test set. For most concepts the development set contained a maximum of 100 images and the test set a maximum of 200 images. Concepts with less than 800 images in total were split into 1/8 test and development set each, and 3/4 training set. The development set was used for devising and refining our attribute annotation scheme. The training and test sets were used for learning and evaluating, respectively, attribute classifiers (see Section 4). Attribute Annotation Our aim was to develop a set of visual attributes that are both discriminating and cognitively plausible, i.e., humans would generally use them to describe a concrete concept. As a starting point, we thus used the visual attributes from McRae et al.’s (2005) norming study. Attributes capturing other primary sensory information (e.g., smell, sound), functional/motor properties, or encyclopaedic information were not taken into account. For example, is purple is a valid visual attribute for an eggplant, whereas a vegetable is not, since it cannot be visualized. Collating all the visual attributes in the norms resulted in a total of 673 which we further modified and extended during the annotation process explained below. The annotation was conducted on a per-concept rather than a per-image basis (as for example in Farhadi et al. (2009)). For each concept (e.g., bear or eggplant), we inspected the images in the development set and chose all McRae et al. (2005) visual attributes that applied. If an attribute was generally true for the concept, but the images did not provide enough evidence, the attribute was nevertheless chosen and labeled with <no evidence>. For example, a plum has a pit, but most images in ImageNet show plums where only the outer part of the fruit is visible. Attributes supported by the image data but missing from the norms were added. For example, has lights and has bumper are attributes of cars but are not included in the norms. Attributes were grouped in eight general classes shown in Table 1. Annotation proceeded on a category-by-category basis, e.g., first all foodrelated concepts were annotated, then animals, vehicles, and so on. Two annotators (both co-authors of this paper) developed the set of attributes for each category. One annotator first labeled concepts with their attributes, and the other annotator reviewed the annotations, making changes if needed. Annotations were revised and compared per category in order to ensure consistency across all concepts of that category. Our methodology is slightly different from Lampert et al. 
(2009) in that we did not simply transfer the attributes from the norms to the concepts in question but refined and extended them according to the visual data. There are several reasons for this. Firstly, it makes sense to select attributes corroborated by the images. Secondly, by looking at the actual images, we could eliminate errors in McRae et al.’s (2005) norms. For example, eight study participants erroneously thought that a catfish has scales. Thirdly, during the annotation process, we normalized synonymous attributes (e.g., has pit and has stone) and attributes that exhibited negligible variations 575 has 2 pieces, has pointed end, has strap, has thumb, has buckles, has heels has shoe laces, has soles, is black, is brown, is white, made of leather, made of rubber climbs, climbs trees, crawls, hops, jumps, eats, eats nuts, is small, has bushy tail has 4 legs, has head, has neck, has nose, has snout, has tail, has claws has eyes, has feet, has toes, diff colours, has 2 legs, has 2 wheels, has windshield, has floorboard, has stand, has tank has mudguard, has seat, has exhaust pipe, has frame, has handlebar, has lights, has mirror has step-through frame, is black, is blue, is red, is white, made of aluminum, made of steel Table 3: Attribute predictions for sandals, squirrel, and motorcycle. in meaning (e.g., has stem and has stalk). Finally, our aim was to collect an exhaustive list of visual attributes for each concept which is consistent across all members of a category. This is unfortunately not the case in McRae et al.’s norms. Participants were asked to list up to 14 different properties that describe a concept. As a result, the attributes of a concept denote the set of properties humans consider most salient. For example, both, lemons and oranges have pulp. But the norms provide this attribute only for the second concept. On average, each concept was annotated with 19 attributes; approximately 14.5 of these were not part of the semantic representation created by McRae et al.’s (2005) participants for that concept even though they figured in the representations of other concepts. Furthermore, on average two McRae et al. attributes per concept were discarded. Examples of concepts and their attributes from our database2 are shown in Table 2. 4 Attribute-based Classification Following previous work (Farhadi et al., 2009; Lampert et al., 2009) we learned one classifier per attribute (i.e., 350 classifiers in total).3 The training set consisted of 91,980 images (with a maximum of 350 images per concept). We used an L2regularized L2-loss linear SVM (Fan et al., 2008) to learn the attribute predictions. We adopted the training procedure of Farhadi al. (2009).4 To learn a classifier for a particular attribute, we used all images in the training data. Images of concepts annotated with the attribute were used as positive examples, and the rest as negative examples. The 2Available from http://homepages.inf.ed.ac.uk/ mlap/index.php?page=resources. 3We only trained classifiers for attributes corroborated by the images and excluded those labeled with <no evidence>. 4http://vision.cs.uiuc.edu/attributes/ data was randomly split into a training and validation set of equal size in order to find the optimal cost parameter C. The final SVM for the attribute was trained on the entire training data, i.e., on all positive and negative examples. The SVM learners used the four different feature types proposed in Farhadi et al. (2009), namely color, texture, visual words, and edges. 
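Before these feature types are detailed, the per-attribute training protocol just described can be sketched as follows; scikit-learn and the candidate values for the cost parameter C are assumptions, and the feature arrays stand for whatever image descriptors are used.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def train_attribute_classifier(image_features, image_concepts,
                               concept_attributes, attribute,
                               candidate_C=(0.01, 0.1, 1.0, 10.0)):
    """One L2-regularized linear SVM per visual attribute.

    image_features     : (n_images, n_features) array of image descriptors
    image_concepts     : concept label of every training image
    concept_attributes : concept -> set of annotated attributes
    """
    # images of concepts annotated with the attribute are positives, the rest negatives
    y = np.array([int(attribute in concept_attributes[c]) for c in image_concepts])
    # half of the data is held out to pick the cost parameter C
    X_train, X_val, y_train, y_val = train_test_split(image_features, y, test_size=0.5)
    best_C = max(candidate_C,
                 key=lambda C: LinearSVC(C=C).fit(X_train, y_train).score(X_val, y_val))
    # the final SVM is trained on the entire training data
    return LinearSVC(C=best_C).fit(image_features, y)
```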
Texture descriptors were computed for each pixel and quantized to the nearest 256 k-means centers. Visual words were constructed with a HOG spatial pyramid. HOG descriptors were quantized into 1000 k-means centers. Edges were detected using a standard Canny detector and their orientations were quantized into eight bins. Color descriptors were sampled for each pixel and quantized to the nearest 128 k-means centers. Shapes and locations were represented by generating histograms for each feature type for each cell in a grid of three vertical and horizontal blocks. Our classifiers used 9,688 features in total. Table 3 shows their predictions for three test images.

Note that attributes are predicted on an image-by-image basis; our task, however, is to describe a concept w by its visual attributes. Since concepts are represented by many images, we must somehow aggregate their attributes into a single representation. For each image i_w ∈ I_w of concept w, we output an F-dimensional vector containing prediction scores score_a(i_w) for attributes a = 1, ..., F. We transform these attribute vectors into a single vector p_w ∈ [0,1]^{1×F} by computing the centroid of all vectors for concept w. The vector is normalized to obtain a probability distribution over attributes given w:

p_w = \frac{\bigl(\sum_{i_w \in I_w} \mathrm{score}_a(i_w)\bigr)_{a=1,\dots,F}}{\sum_{a=1}^{F} \sum_{i_w \in I_w} \mathrm{score}_a(i_w)}    (1)

We additionally impose a threshold δ on p_w by setting each entry less than δ to zero. Figure 1 shows the results of the attribute prediction on the test set on the basis of the computed centroids; specifically, we plot recall against precision based on threshold δ.5

Figure 1: Attribute classifier performance (precision against recall) for different thresholds δ (test set)

Table 4 shows the 10 nearest neighbors for five example concepts from our dataset. Again, we measure the cosine similarity between a concept and all other concepts in the dataset when these are represented by their visual attribute vector p_w.

5 Attribute-based Semantic Models

We evaluated the effectiveness of our attribute classifiers by integrating their predictions with traditional text-only models of semantic representation. These models have been previously proposed in the literature and were also described in a recent comparative study (Silberer and Lapata, 2012). We represent the visual modality by attribute vectors computed as shown in Equation (1). The linguistic environment is approximated by textual attributes. We used Strudel (Baroni et al., 2010) to obtain these attributes for the nouns in our dataset. Given a list of target words, Strudel extracts weighted word-attribute pairs from a lemmatized and pos-tagged text corpus (e.g., eggplant–cook-v, eggplant–vegetable-n). The weight of each word-attribute pair is a log-likelihood ratio score expressing the pair's strength of association. In our experiments we learned word-attribute pairs from a lemmatized and pos-tagged (2009) dump of the English Wikipedia.6

5Threshold values ranged from 0 to 0.9 with a 0.1 step size.
6The corpus can be downloaded from http://wacky.sslmit.unibo.it/doku.php?id=corpora.

In the remainder of this section we will briefly describe the models we
Concept Nearest Neighbors boat ship, sailboat, yacht, submarine, canoe, whale, airplane, jet, helicopter, tank (army) rooster chicken, turkey, owl, pheasant, peacock, stork, pigeon, woodpecker, dove, raven shirt blouse, robe, cape, vest, dress, coat, jacket, skirt, camisole, nightgown spinach lettuce, parsley, peas, celery, broccoli, cabbage, cucumber, rhubarb, zucchini, asparagus squirrel chipmunk, raccoon, groundhog, gopher, porcupine, hare, rabbit, fox, mole, emu Table 4: Ten most similar concepts computed on the basis of averaged attribute vectors and ordered according to cosine similarity. used in our study and how the textual and visual modalities were fused to create a joint representation. Concatenation Model Variants of this model were originally proposed in Bruni et al. (2011) and Johns and Jones (2012). Let T ∈RN×D denote a term-attribute co-occurrence matrix, where each cell records a weighted co-occurrence score of a word and a textual attribute. Let P ∈[0,1]N×F denote a visual matrix, representing a probability distribution over visual attributes for each word. A word’s meaning can be then represented by the concatenation of its normalized textual and visual vectors. Canonical Correlation Analysis The second model uses Canonical Correlation Analysis (CCA, Hardoon et al. (2004)) to learn a joint semantic representation from the textual and visual modalities. Given two random variables x and y (or two sets of vectors), CCA can be seen as determining two sets of basis vectors in such a way, that the correlation between the projections of the variables onto these bases is mutually maximized (Borga, 2001). In effect, the representation-specific details pertaining to the two views of the same phenomenon are discarded and the underlying hidden factors responsible for the correlation are revealed. The linguistic and visual views are the same as in the simple concatenation model just explained. We use a kernelized version of CCA (Hardoon et al., 2004) that first projects the data into a higherdimensional feature space and then performs CCA in this new feature space. The two kernel matrices are KT = TT ′ and KP = PP′. After applying CCA we obtain two matrices projected onto l basis vectors, ˜T ∈RN×l, resulting from the projection of the 577 textual matrix T onto the new basis and ˜P ∈RN×l, resulting from the projection of the corresponding visual attribute matrix. The meaning of a word is then represented by ˜T or ˜P. Attribute-topic Model Andrews et al. (2009) present an extension of LDA (Blei et al., 2003) where words in documents and their associated attributes are treated as observed variables that are explained by a generative process. The idea is that each document in a document collection D is generated by a mixture of components {x1,...,xc,...,xC} ∈C, where a component xc comprises a latent discourse topic coupled with an attribute cluster. Inducing these attributetopic components from D with the extended LDA model gives two sets of parameters: word probabilities given components PW(wi|X = xc) for wi, i = 1,...,n, and attribute probabilities given components PA(ak|X = xc) for ak, k = 1,...,F. For example, most of the probability mass of a component x would be reserved for the words shirt, coat, dress and the attributes has 1 piece, has seams, made of material and so on. Word meaning in this model is represented by the distribution PX|W over the learned components. 
Assuming a uniform distribution over components xc in D, PX|W can be approximated as: PX=xc|W=wi = P(wi|xc)P(xc) P(wi) ≈ P(wi|xc) C ∑ l=1 P(wi|xl) (2) where C is the total number of components. In our work, the training data is a corpus D of textual attributes (rather than documents). Each attribute is represented as a bag-of-concepts, i.e., words demonstrating the property expressed by the attribute (e.g., vegetable-n is a property of eggplant, spinach, carrot). For some of these concepts, our classifiers predict visual attributes. In this case, the concepts are paired with one of their visual attributes. We sample attributes for a concept w from their distribution given w (Eq. (1)). 6 Experimental Setup Evaluation Task We evaluated the distributional models presented in Section 5 on the word association norms collected by Nelson et al. (1998).7 These were established by presenting a large number of participants with a cue word (e.g., rice) and asking them to name an associate 7From http://w3.usf.edu/FreeAssociation/. word in response (e.g., Chinese, wedding, food, white). For each cue, the norms provide a set of associates and the frequencies with which they were named. We can thus compute the probability distribution over associates for each cue. Analogously, we can estimate the degree of similarity between a cue and its associates using our models. The norms contain 63,619 unique cueassociate pairs. Of these, 435 pairs were covered by McRae et al. (2005) and our models. We also experimented with 1,716 pairs that were not part of McRae et al.’s study but belonged to concepts covered by our attribute taxonomy (e.g., animals, vehicles), and were present in our corpus and ImageNet. Using correlation analysis (Spearman’s ρ), we examined the degree of linear relationship between the human cue-associate probabilities and the automatically derived similarity values.8 Parameter Settings In order to integrate the visual attributes with the models described in Section 5 we must select the appropriate threshold value δ (see Eq. (1)). We optimized this value on the development set and obtained best results with δ = 0. We also experimented with thresholding the attribute prediction scores and with excluding attributes with low precision. In both cases, we obtained best results when using all attributes. We could apply CCA to the vectors representing each image separately and then compute a weighted centroid on the projected vectors. We refrained from doing this as it involves additional parameters and assumes input different from the other models. We measured the similarity between two words using the cosine of the angle. For the attribute-topic model, the number of predefined components C was set to 10. In this model, similarity was measured as defined by Griffiths et al. (2007). The underlying idea is that word association can be expressed as a conditional distribution. With regard to the textual attributes, we obtained a 9,394-dimensional semantic space after discarding word-attribute pairs with a log-likelihood ratio score less than 19.9 We also discarded attributes co-occurring with less than two different words. 8Previous work (Griffiths et al., 2007) which also predicts word association reports how many times the word with the highest score under the model was the first associate in the human norms. 
This evaluation metric assumes that there are many associates for a given cue which unfortunately is not the case in our study which is restricted to the concepts represented in our attribute taxonomy. 9Baroni et al. (2010) use a similar threshold of 19.51. 578 Nelson Concat CCA TopicAttr TextAttr Concat 0.24 CCA 0.30 0.72 TopicAttr 0.26 0.55 0.28 TextAttr 0.21 0.80 0.83 0.34 VisAttr 0.23 0.65 0.52 0.40 0.39 Table 5: Correlation matrix for seen Nelson et al. (1998) cue-associate pairs and five distributional models. All correlation coefficients are statistically significant (p < 0.01, N = 435). 7 Results Our experiments were designed to answer four questions: (1) Do visual attributes improve the performance of distributional models? (2) Are there performance differences among different models, i.e., are some models better suited to the integration of visual information? (3) How do computational models fare against gold standard norming data? (4) Does the attribute-based representation bring advantages over more conventional approaches based on raw image features? Our results are broken down into seen (Table 5) and unseen (Table 6) concepts. The former are known to the attribute classifiers and form part of our database, whereas the latter are unknown and are not included in McRae et al.’s (2005) norms. We report the correlation coefficients we obtain when human-derived cue-associate probabilities (Nelson et al., 1998) are compared against the simple concatenation model (Concat), CCA, and Andrews et al.’s (2009) attribute-topic model (TopicAttr). We also report the performance of a distributional model that is based solely on the output of our attribute classifiers, i.e., without any textual input (VisAttr) and conversely the performance of a model that uses textual information only (i.e., Strudel attributes) without any visual input (TextAttr). The results are displayed as a correlation matrix so that inter-model correlations can also be observed. As can be seen in Table 5 (second column), two modalities are in most cases better than one when evaluating model performance on seen data. Differences in correlation coefficients between models with two versus one modality are all statistically significant (p < 0.01 using a t-test), with the exception of Concat when compared against VisAttr. It is also interesting to note that TopicAttr is the least correlated model when compared against other bimodal models or single modaliNelson Concat CCA TopicAttr TextAttr Concat 0.11 CCA 0.15 0.66 TopicAttr 0.17 0.69 0.48 TextAttr 0.11 0.65 0.25 0.39 VisAttr 0.13 0.57 0.87 0.57 0.34 Table 6: Correlation matrix for unseen Nelson et al. (1998) cue-associate pairs and five distributional models. All correlation coefficients are statistically significant (p < 0.01, N = 1,716). ties. This indicates that the latent space obtained by this model is most distinct from its constituent parts (i.e., visual and textual attributes). Perhaps unsuprisingly Concat, CCA, VisAttr, and TextAttr are also highly intercorrelated. On unseen pairs (see Table 6), Concat fares worse than CCA and TopicAttr, achieving similar performance to TextAttr. CCA and TopicAttr are significantly better than TextAttr and VisAttr (p < 0.01). This indicates that our attribute classifiers generalize well beyond the concepts found in our database and can produce useful visual information even on unseen images. Compared to Concat and CCA, TopicAttr obtains a better fit with the human association norms on the unseen data. 
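Concretely, the evaluation for the vector-based models reduces to scoring each covered cue–associate pair with cosine similarity and correlating the scores with the human association probabilities; only the attribute-topic model instead uses the conditional-probability measure of Griffiths et al. (2007). The helper below is a hedged sketch with invented names, not the evaluation code used in this work.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom > 0 else 0.0

def evaluate_norms(model_vectors, norm_pairs):
    """Correlate model similarities with human cue-associate probabilities.

    model_vectors: dict mapping a word to its vector under some model
                   (Concat, CCA projection, TextAttr, VisAttr, ...).
    norm_pairs:    list of (cue, associate, human_probability) triples,
                   e.g. from the Nelson et al. (1998) norms.
    """
    human, predicted = [], []
    for cue, assoc, p in norm_pairs:
        if cue in model_vectors and assoc in model_vectors:
            human.append(p)
            predicted.append(cosine(model_vectors[cue], model_vectors[assoc]))
    rho, pval = spearmanr(human, predicted)
    return rho, pval, len(human)   # correlation, significance, covered pairs
```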
To answer our third question, we obtained distributional models from McRae et al.’s (2005) norms and assessed how well they predict Nelson et al.’s (1998) word-associate similarities. Each concept was represented as a vector with dimensions corresponding to attributes generated by participants of the norming study. Vector components were set to the (normalized) frequency with which participants generated the corresponding attribute when presented with the concept. We measured the similarity between two words using the cosine coefficient. Table 7 presents results for different model variants which we created by manipulating the number and type of attributes involved. The first model uses the full set of attributes present in the norms (All Attributes). The second model (Text Attributes) uses all attributes but those classified as visual (e.g., functional, encyclopaedic). The third model (Visual Attributes) considers solely visual attributes. We observe a similar trend as with our computational models. Taking visual attributes into account increases the fit with Nelson’s (1998) association norms, whereas visual and textual attributes on their own perform worse. Interestingly, CCA’s 579 Models Seen All Attributes 0.28 Text Attributes 0.20 Visual Attributes 0.25 Table 7: Model performance on seen Nelson et al. (1998) cue-associate pairs; models are based on gold human generated attributes (McRae et al., 2005). All correlation coefficients are statistically significant (p < 0.01, N = 435). Models Seen Unseen Concat 0.22 0.10 CCA 0.26 0.15 TopicAttr 0.23 0.19 TextAttr 0.20 0.08 VisAttr 0.21 0.13 MixLDA 0.16 0.11 Table 8: Model performance on a subset of Nelson et al. (1998) cue-associate pairs. Seen are concepts known to the attribute classifiers and covered by MixLDA (N = 85). Unseen are concepts covered by LDA but unknown to the attribute classifiers (N = 388). All correlation coefficients are statistically significant (p < 0.05). performance is comparable to the All Attributes model (see Table 5, second column), despite using automatic attributes (both textual and visual). Furthermore, visual attributes obtained through our classifiers (see Table 5) achieve a marginally lower correlation coefficient against human generated ones (see Table 7). Finally, to address our last question, we compared our approach against Feng and Lapata (2010) who represent visual information via quantized SIFT features. We trained their MixLDA model on their corpus consisting of 3,361 BBC news documents and corresponding images (Feng and Lapata, 2008). We optimized the model parameters on a development set consisting of cueassociate pairs from Nelson et al. (1998), excluding the concepts in McRae et al. (2005). We used a vocabulary of approximately 6,000 words. The best performing model on the development set used 500 visual terms and 750 topics and the association measure proposed in Griffiths et al. (2007). The test set consisted of 85 seen and 388 unseen cue-associate pairs that were covered by our models and MixLDA. Table 8 reports correlation coefficients for our models and MixLDA against human probabilities. All attribute-based models significantly outperform MixLDA on seen pairs (p < 0.05 using a t-test). MixLDA performs on a par with the concatenation model on unseen pairs, however CCA, TopicAttr, and VisAttr are all superior. 
Although these comparisons should be taken with a grain of salt, given that MixLDA and our models are trained on different corpora (MixLDA assumes that texts and images are collocated, whereas our images do not have collateral text), they seem to indicate that attribute-based information is indeed beneficial. 8 Conclusions In this paper we proposed the use of automatically computed visual attributes as a way of physically grounding word meaning. Our results demonstrate that visual attributes improve the performance of distributional models across the board. On a word association task, CCA and the attribute-topic model give a better fit to human data when compared against simple concatenation and models based on a single modality. CCA consistently outperforms the attribute-topic model on seen data (it is in fact slightly better over a model that uses gold standard human generated attributes), whereas the attribute-topic model generalizes better on unseen data (see Tables 5, 6, and 8). Since the attributebased representation is general and text-based we argue that it can be conveniently integrated with any type of distributional model or indeed other grounded models that rely on low-level image features (Bruni et al., 2012a; Feng and Lapata, 2010) In the future, we would like to extend our database to actions and show that this attributecentric representation is useful for more applied tasks such as image description generation and object recognition. Finally, we have only scratched the surface in terms of possible models for integrating the textual and visual modality. Interesting frameworks which we plan to explore are deep belief networks and Bayesian non-parametrics. References M. Andrews, G. Vigliocco, and D. Vinson. 2009. Integrating Experiential and Distributional Data to Learn Semantic Representations. Psychological Review, 116(3):463–498. M. Baroni, B. Murphy, E. Barbu, and M. Poesio. 2010. Strudel: A Corpus-Based Semantic Model 580 Based on Properties and Types. Cognitive Science, 34(2):222–254. L. W. Barsalou. 2008. Grounded Cognition. Annual Review of Psychology, 59:617–845. D. M. Blei, A. Y. Ng, and M. I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022, March. M. Borga. 2001. Canonical Correlation – a Tutorial, January. M. H. Bornstein, L. R. Cote, S. Maital, K. Painter, S.-Y. Park, L. Pascual, M. G. Pˆecheux, J. Ruel, P. Venuti, and A. Vyt. 2004. Cross-linguistic Analysis of Vocabulary in Young Children: Spanish, Dutch, French, Hebrew, Italian, Korean, and American English. Child Development, 75(4):1115–1139. B. B¨orschinger, B. K. Jones, and M. Johnson. 2011. Reducing Grounded Learning Tasks to Grammatical Inference. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1416–1425, Edinburgh, UK. S.R.K. Branavan, H. Chen, L. S. Zettlemoyer, and R. Barzilay. 2009. Reinforcement Learning for Mapping Instructions to Actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 82–90, Suntec, Singapore. E. Bruni, G. Tran, and M. Baroni. 2011. Distributional Semantics from Text and Images. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 22–32, Edinburgh, UK. E. Bruni, G. Boleda, M. Baroni, and N. Tran. 2012a. Distributional Semantics in Technicolor. 
In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 136–145, Jeju Island, Korea. E. Bruni, J. Uijlings, M. Baroni, and N. Sebe. 2012b. Distributional semantics with eyes: Using image analysis to improve computational representations of word meaning. In Proceedings of the 20th ACM International Conference on Multimedia, pages 1219–1228., New York, NY. C. Chai and C. Hung. 2008. Automatically Annotating Images with Keywords: A Review of Image Annotation Systems. Recent Patents on Computer Science, 1:55–68. R. Datta, D. Joshi, J. Li, and J. Z. Wang. 2008. Image Retrieval: Ideas, Influences, and Trends of the New Age. ACM Computing Surveys, 40(2):1–60. J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. FeiFei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 248–255, Miami, Florida. M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. 2008. The PASCAL Visual Object Classes Challenge 2008 (VOC2008) Results. http://www.pascalnetwork.org/challenges/VOC/voc2008/workshop. R. Fan, K. Chang, C. Hsieh, X. Wang, and C. Lin. 2008. LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research, 9:1871–1874. A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. 2009. Describing Objects by their Attributes. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1778–1785, Miami Beach, Florida. L. Fei-Fei and P. Perona. 2005. A Bayesian Hierarchical Model for Learning Natural Scene Categories. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 524–531, San Diego, California. C. Fellbaum, editor. 1998. WordNet: an Electronic Lexical Database. MIT Press. Y. Feng and M. Lapata. 2008. Automatic image annotation using auxiliary text information. In Proceedings of ACL-08: HLT, pages 272–280, Columbus, Ohio. Y. Feng and M. Lapata. 2010. Visual Information in Semantic Representation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 91–99, Los Angeles, California. ACL. V. Ferrari and A. Zisserman. 2007. Learning Visual Attributes. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 433–440. MIT Press, Cambridge, Massachusetts. G. H. Golub, F. T. Luk, and M. L. Overton. 1981. A block lanczoz method for computing the singular values and corresponding singular vectors of a matrix. ACM Transactions on Mathematical Software, 7:149–169. P. Gorniak and D. Roy. 2004. Grounded Semantic Composition for Visual Scenes. Journal of Artificial Intelligence Research, 21:429–470. T. L. Griffiths, M. Steyvers, and J. B. Tenenbaum. 2007. Topics in Semantic Representation. Psychological Review, 114(2):211–244. D. R. Hardoon, S. R. Szedmak, and J. R. ShaweTaylor. 2004. Canonical Correlation Analysis: An Overview with Application to Learning Methods. Neural Computation, 16(12):2639–2664. B. T. Johns and M. N. Jones. 2012. Perceptual Inference through Global Lexical Similarity. Topics in Cognitive Science, 4(1):103–120. D. Joshi, J.Z. Wang, and J. Li. 2006. The Story Picturing Engine—A System for Automatic Text illustration. ACM Transactions on Multimedia Computing, Communications, and Applications, 2(1):68–89. 581 R. J. Kate and R. J. Mooney. 2007. 
Learning Language Semantics from Ambiguous Supervision. In Proceedings of the 22nd Conference on Artificial Intelligence, pages 895–900, Vancouver, Canada. N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. 2009. Attribute and Simile Classifiers for Face Verification. In Proceedings of the IEEE 12th International Conference on Computer Vision, pages 365–372, Kyoto, Japan. C. H. Lampert, H. Nickisch, and S. Harmeling. 2009. Learning To Detect Unseen Object Classes by Between-Class Attribute Transfer. In Computer Vision and Pattern Recognition, pages 951–958, Miami Beach, Florida. B. Landau, L. Smith, and S. Jones. 1998. Object Perception and Object Naming in Early Development. Trends in Cognitive Science, 27:19–24. C. Leong and R. Mihalcea. 2011. Going Beyond Text: A Hybrid Image-Text Approach for Measuring Word Relatedness. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1403–1407, Chiang Mai, Thailand. J. Liu, B. Kuipers, and S. Savarese. 2011. Recognizing Human Actions by Attributes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3337–3344, Colorado Springs, Colorado. D. G. Lowe. 1999. Object Recognition from Local Scale-invariant Features. In Proceedings of the International Conference on Computer Vision, pages 1150–1157, Corfu, Greece. D. Lowe. 2004. Distinctive Image Features from Scale-invariant Keypoints. International Journal of Computer Vision, 60(2):91–110. W. Lu, H. T. Ng, W.S. Lee, and L. S. Zettlemoyer. 2008. A Generative Model for Parsing Natural Language to Meaning Representations. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 783–792, Honolulu, Hawaii. K. McRae, G. S. Cree, M. S. Seidenberg, and C. McNorgan. 2005. Semantic Feature Production Norms for a Large Set of Living and Nonliving Things. Behavior Research Methods, 37(4):547–559. D. L. Nelson, C. L. McEvoy, and T. A. Schreiber. 1998. The University of South Florida Word Association, Rhyme, and Word Fragment Norms. J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng. 2011. Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Leanring, pages 689–696, Bellevue, Washington. A. Oliva and A. Torralba. 2007. The Role of Context in Object Recognition. Trends in Cognitive Sciences, 11(12):520–527. D. N. Osherson, J. Stern, O. Wilkie, M. Stob, and E. E. Smith. 1991. Default Probability. Cognitive Science, 2(15):251–269. G. Patterson and J. Hays. 2012. SUN Attribute Database: Discovering, Annotating and Recognizing Scene Attributes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2751–2758, Providence, Rhode Island. T. Regier. 1996. The Human Semantic Potential. MIT Press, Cambridge, Massachusetts. D. Roy and A. Pentland. 2002. Learning Words from Sights and Sounds: A Computational Model. Cognitive Science, 26(1):113–146. C. Silberer and M. Lapata. 2012. Grounded Models of Semantic Representation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1423–1433, Jeju Island, Korea. J. M. Siskind. 2001. Grounding the Lexical Semantics of Verbs in Visual Perception using Force Dynamics and Event Logic. Journal of Artificial Intelligence Research, 15:31–90. S. A. Sloman and L. J. Ripps. 1998. Similarity as an Explanatory Construct. Cognition, 65:87–101. N. Srivastava and R. Salakhutdinov. 2012. 
Multimodal learning with deep boltzmann machines. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems, pages 2231–2239, Lake Tahoe, Nevada. M. Steyvers. 2010. Combining feature norms and text data with topic models. Acta Psychologica, 133(3):234–342. S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. Gopal Banerjee, S. Teller, and N. Roy. 2011. Understanding Natural Language Commands for Robotic Navigation and Manipulation. In Proceedings of the 25th National Conference on Artificial Intelligence, pages 1507–1514, San Francisco, California. L. von Ahn and L. Dabbish. 2004. Labeling images with a computer game. In Proceeings of the Human Factors in Computing Systems Conference, pages 319–326, Vienna, Austria. C. Yu and D. H. Ballard. 2007. A Unified Model of Early Word Learning Integrating Statistical and Social Cues. Neurocomputing, 70:2149–2165. M. D. Zeigenfuse and M. D. Lee. 2010. Finding the Features that Represent Stimuli. Acta Psychological, 133(3):283–295. J. M. Zelle and R. J. Mooney. 1996. Learning to Parse Database Queries Using Inductive Logic Programming. In Proceedings of the 13th National Conference on Artificial Intelligence, pages 1050–1055, Portland, Oregon. L. S. Zettlemoyer and M. Collins. 2005. Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 658–666, Edinburgh, UK. 582
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 583–592, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Real-World Semi-Supervised Learning of POS-Taggers for Low-Resource Languages Dan Garrette1 Jason Mielens2 1Department of Computer Science 2Department of Linguistics The University of Texas at Austin The University of Texas at Austin [email protected] {jmielens,jbaldrid}@utexas.edu Jason Baldridge2 Abstract Developing natural language processing tools for low-resource languages often requires creating resources from scratch. While a variety of semi-supervised methods exist for training from incomplete data, there are open questions regarding what types of training data should be used and how much is necessary. We discuss a series of experiments designed to shed light on such questions in the context of part-of-speech tagging. We obtain timed annotations from linguists for the low-resource languages Kinyarwanda and Malagasy (as well as English) and evaluate how the amounts of various kinds of data affect performance of a trained POS-tagger. Our results show that annotation of word types is the most important, provided a sufficiently capable semi-supervised learning infrastructure is in place to project type information onto a raw corpus. We also show that finitestate morphological analyzers are effective sources of type information when few labeled examples are available. 1 Introduction Low-resource languages present a particularly difficult challenge for natural language processing tasks. For example, supervised learning methods can provide high accuracy for part-of-speech (POS) tagging (Manning, 2011), but they perform poorly when little supervision is available. Good results in weakly-supervised tagging have been obtained by training sequence models such as hidden Markov models (HMM) using the Expectation-Maximization algorithm (EM), however most work in this area has still relied on relatively large amounts of data, both annotated and unannotated, as well as an assumption that the annotations are very clean (Kupiec, 1992; Merialdo, 1994). The ability to learn taggers using very little data is enticing: only a tiny fraction of the world’s languages have enough data for standard supervised models to work well. The collection or development of resources is a time-consuming and expensive process, creating a significant barrier for an under-studied language where there are few experts and little funding. It is thus important to develop approaches that achieve good accuracy based on the amount of data that can be reasonably obtained, for example, in just a few hours by a linguist doing fieldwork on a non-native language. Previous work explored learning taggers from weak information, but the type, amount, quality, and sources of data raise questions about the applicability of those results to real-world low-resource scenarios (Toutanova and Johnson, 2008; Ravi and Knight, 2009; Hasan and Ng, 2009; Garrette and Baldridge, 2012). Most research simulated weak supervision with tag dictionaries extracted from existing large, expertly-annotated corpora. These resources have been developed over long periods of time by trained annotators who collaborate to produce high-quality analyses. They are also biased towards including only the most likely tag for each word type, resulting in a cleaner dictionary than one would find in a real scenario. As such, these experiments do not reflect real-world constraints. 
One exception to this work is Goldberg et al. (2008): they use a manually-constructed lexicon for Hebrew in order to learn an HMM tagger. However, this lexicon was constructed by trained lexicographers over a long period of time and achieves very high coverage of the language with very good quality, much better than could be achieved by our non-expert linguistics graduate student annotators in just a few hours. Cucerzan and Yarowsky 583 (2002) learn a POS-tagger from existing linguistic resources, namely a dictionary and a reference grammar, but these resources are not available, much less digitized, for most under-studied languages. Haghighi and Klein (2006) develop a model in which a POS-tagger is learned from a list of POS tags and just three “prototype” word types for each tag, but their approach requires a vector space to compute the distributional similarity between prototypes and other word types in the corpus. Such distributional models are not feasible for low-resource languages because they require immense amounts of raw text, much more than is available in these settings (Abney and Bird, 2010). Further, they extracted their prototype lists directly from a labeled corpus, something we are specifically avoiding. T¨ackstr¨om et al. (2013) evaluate the use of mixed type and token constraints generated by projecting information from a highresource language to a low-resource language via a parallel corpus. However, large parallel corpora are not available for most low-resource languages. These are also expensive resources to create and would take considerably more effort to produce than the monolingual resources that our annotators were able to generate in a four-hour timeframe. Of course, if they are available, such parallel text links could be incorporated into our approach. In our previous work, we developed a different strategy based on generalizing linguistic input with a computational model: linguists annotated either types or tokens for two hours, these annotations are projected onto a corpus of unlabeled tokens using label propagation and HMMs, and a final POS-tagger is trained on this larger autolabeled corpus (Garrette and Baldridge, 2013). That approach uses much more realistic types and quantities of resources than previous work; nonetheless, it leaves many open questions regarding the effectiveness of incrementally more annotation, the role of unannotated data, and whether there is a good balance to be found using a combination of type- and token-supervision. We also did not consider morphological analyzers as a form of type supervision, as suggested by Merialdo (1994). This paper addresses these questions via a series of experiments designed to quantify the effect on performance given by the amount of time spent finding or annotating training materials. We specifically look at the impact of four types of data collection: 1. Time annotating sentences (token supervision) 2. Time creating tag dictionary (type supervision) 3. Time constructing a finite state transducer (FST) to analyze word-type morphology 4. Amount of raw data available for training We explore these strategies in the context of POStagging for Kinyarwanda and Malagasy. We also include experiments for English, pretending as though it is a low-resource language. The overwhelming take away from our results is that type supervision—when backed by an effective semisupervised learning approach—is the most important source of linguistic information. 
Also, morphological analyzers help for morphologically rich languages when there are few labeled types or tokens (and, it never hurts to use them). Finally, performance improves with more raw data, though we see diminishing returns past 400,000 tokens. With just four hours of type annotation, our system obtains good accuracy across the three languages: 89.8% on English, 81.9% on Kinyarwanda, and 81.2% on Malagasy. Our results compare favorably with previous work despite using considerably less supervision and a more difficult set of tags. For example, Li et al. (2012) use the entirety of English Wiktionary directly as a tag dictionary to obtain 87.1% accuracy on English, below our result. T¨ackstr¨om et al. (2013) average 88.8% across 8 major languages, but for Turkish, a morphologically rich language, they achieve only 65.2%, significantly below our 81.9% for morphologically-rich Kinyarwanda. 2 Data Kinyarwanda (KIN) and Malagasy (MLG) are lowresource, KIN is morphologically rich, and English (ENG) is used for comparison. For each language, sentences were divided into four sets: training data to be labeled by annotators, raw training data, development data, and test data. Data sources The KIN texts are transcripts of testimonies by survivors of the Rwandan genocide provided by the Kigali Genocide Memorial Center. The MLG texts are articles from the websites1 Lakroa and La Gazette and Malagasy Global Voices.2 Texts in both KIN and MLG were tok1www.lakroa.mg and www.lagazette-dgi.com 2mg.globalvoicesonline.org/ 584 KIN MLG ENG - Experienced ENG - Novice time type token type token type token type token 1:00 801 559 (1093) 660 422 (899) 910 522 (1124) 210 308 (599) 2:00 1814 948 (2093) 1363 785 (1923) 2660 1036 (2375) 631 646 (1429) 3:00 2539 1324 (3176) 2043 1082 (3064) 4561 1314 (3222) 1350 953 (2178) 4:00 3682 1651 (4119) 2773 1378 (4227) 6598 1697 (4376) 2185 1220 (2933) Table 1: Annotations for each language and annotator as time increases. Shows the number of tag dictionary entries from type annotation vs. token. (The count of labeled tokens is shown in parentheses). For brevity, the table only shows hourly progress. enized and labeled with POS tags by two linguistics graduate students, each of which was studying one of the languages. The KIN and MLG data have 12 and 23 distinct POS tags, respectively. The Penn Treebank (PTB) (Marcus et al., 1993) is used as ENG data. Section 01 was used for token-supervised annotation, sections 02-14 were used as raw data, 15-18 for development of the FST, 19-21 as a dev set and 22-24 as a test set. The PTB uses 45 distinct POS tags. Collecting annotations Linguists with nonnative knowledge of KIN and MLG produced annotations for four hours (in 30-minute intervals) for two tasks. In the first task, type-supervision, the annotator was given a list of the words in the target language (ranked from most to least frequent), and they annotated each word type with its potential POS tags. The word types and frequencies used for this task were taken from the raw training data and did not include the test sets. In the second task, token-supervision, full sentences were annotated with POS tags. The 30-minute intervals allow us to investigate the incremental benefit of additional annotation of each type as well as how both annotation types might be combined within a fixed annotation budget. Baldridge and Palmer (2009) found that annotator expertise greatly influences effectiveness of active learning for morpheme glossing, a related task. 
To see how differences in annotator speed and quality impact our task, we obtained ENG data from an experienced annotator and a novice one. Ngai and Yarowsky (2000) investigated the effectiveness of rule-writing versus annotation (using active learning) for chunking, and found the latter to be far more effective. While we do not explore a rule-writing approach to POS-tagging, we do consider the impact of rule-based morphological analyzers as a component in our semisupervised POS-tagging system. ENG - Exp. ENG - Nov. time type tok type tok 1:00 0.05 0.03 0.01 0.02 2:00 0.15 0.05 0.03 0.03 3:00 0.24 0.06 0.07 0.05 4:00 0.32 0.08 0.11 0.06 Table 2: Tag dictionary recall against the test set for ENG annotators on type and token annotations. Annotations Table 1 gives statistics for all languages and annotators showing progress during the 4-hour tasks. With token-annotation, tag dictionary growth slows because high-frequency words are repeatedly annotated, producing only additional frequency and sequence information. In contrast, every type-annotation label is a new tag dictionary entry. For types, growth increases over time, reflecting the fact that high-frequency words (which are addressed first) tend to be more ambiguous and thus require more careful thought than later words. For ENG, we can compare the tagging speed of the experienced annotator with the novice: 50% more tokens and 3 times as many types. The token-tagging speed stayed fairly constant for the experienced annotator, but the novice increased his rate, showing the result of practice. Checking the annotators’ output against the gold tags in the PTB shows that both had good tagging accuracy on tokens: 94-95%. Comparing the tag dictionary entries versus the test data, precision starts in the high 80%s and falls to to the mid-70%s in all cases. However, the differences in recall, shown in Table 2, are more interesting. On types, the experienced annotator maxed out at 32%, but the novice only reaches 11%. Moreover, the maximum for token annotations is much lower due to high repeat-annotation. The discrepancies between experienced and novice, and between type and token recall explain a great deal of the performance disparity seen in the experiments. 585 3 Morphological Transducers Finite-state transducers (FSTs) accept regular languages and can be constructed easily using regular expressions, which makes them quite useful for phonology, morphology and limited areas of syntax (Karttunen, 2001). Past work has used FSTs for direct POS-tagging (Roche and Schabes, 1995), but this requires tight coupling between the FST and target tagset. We use FSTs for morphological analysis: the FST accepts a word type and produces a set of morphological features. If there are multiple possible analyses for a given word type, the FST returns them all. For instance the Kinyarwanda verb sibatarazuka “he is not yet resurrected” is analyzed in several ways: • +NEG+CL2+1PL+V+arazuk+IMP • +NEG+CL2+NOT.YET+PRES+zuk+IMP • +NEG+CL2+NOT.YET+razuk+IMP FSTs are particularly valuable for their ability to analyze out-of-vocabulary items. By looking for known affixes, FSTs can guess the stem of a word and produce an analysis despite not having knowledge of that stem. For morphologically complex languages like KIN, this ability is especially useful. Other factors, such as a large number of morphologically-conditioned phonological changes (seen in MLG) make out-of-vocabulary guessing more challenging because of the large number of potential stems (high ambiguity). 
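The fragment below is not one of these transducers but an invented, drastically simplified stand-in that mimics their behaviour with affix stripping. Its affix and feature inventory is hypothetical and far smaller than a real Kinyarwanda grammar; the point is only to show how several competing analyses, including stem guesses for out-of-vocabulary words, can be returned.

```python
# Illustrative stand-in for a morphological FST: a few known stems and
# affix rules (hypothetical inventory), returning every analysis that matches.
KNOWN_STEMS = {"zuk", "razuk"}
PREFIXES = [("si", "+NEG"), ("ba", "+CL2"), ("tara", "+NOT.YET"), ("ra", "+PRES")]
SUFFIXES = [("a", "+IMP")]

def analyze(word):
    """Return all analyses (feature sequences plus a stem) for a word,
    guessing the stem of out-of-vocabulary items by affix stripping."""
    analyses = []

    def strip(rest, feats):
        for suffix, sfeat in SUFFIXES:
            if rest.endswith(suffix) and len(rest) > len(suffix):
                stem = rest[: -len(suffix)]
                tag = "+V" if stem in KNOWN_STEMS else "+V?"  # '?' marks an OOV guess
                analyses.append(feats + [tag, stem, sfeat])
        for prefix, pfeat in PREFIXES:
            if rest.startswith(prefix) and len(rest) > len(prefix):
                strip(rest[len(prefix):], feats + [pfeat])

    strip(word, [])
    return analyses

# analyze("sibatarazuka") yields several competing analyses, mirroring the
# ambiguous output of the real FST shown above.
```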
Development of the FSTs for all three languages was done by iteratively adding rules and lexical items with the goal of increasing coverage on a raw dataset. To accomplish this on a fixed time budget, the most frequently occurring unanalyzed tokens were examined, and their stems plus any observable morphological or phonological patterns were added to the transducer. Additionally, developers searched for known morphological alternations to locate instances of phonological change for inclusion. Coverage was checked against a raw dataset which did not include the test data used for the POS experiments. The KIN and MLG FSTs were created by English-speaking linguists who were familiar with their respective language. They also used dictionaries and grammars. Each FST was developed in 10 hours. To evaluate the benefits of more development time, a version of the English FST was saved every 30 minutes, as shown in Table 3. elapsed time tokens types count pct count pct 2:00 130k 61% 2.1k 12% 4:00 159k 75% 4.1k 24% 6:00 170k 80% 6.7k 39% 8:00 182k 86% 7.7k 44% 10:00 192k 91% 10.7k 62% Table 3: Coverage of the English morphological FST during development. For brevity, showing 2hour increments instead of 30-minute segments. tokens types cov. ambig. cov. ambig. KIN 86% 2.62 82% 5.31 MLG 78% 2.98 37% 1.13 ENG 91% 1.19 62% 1.97 Table 4: Coverage and ambiguity of the final FST for each language. 4 Approach Learning under low-resource conditions is more difficult than scenarios in most previous POS work because the vast majority of the word types in the training and test data are not covered by the annotations. When most words are unknown, learning algorithms such as EM struggle (Garrette and Baldridge, 2012). Recall that most work on learning POS-taggers from tag dictionaries used tag dictionaries culled from test sets (even when considering incomplete dictionaries). We thus build on our previous approach, which exploits extremely sparse, human-generated annotations that are produced without knowledge of which words appear in the test set (Garrette and Baldridge, 2013). This approach generalizes a small initial tag dictionary to include unannotated word types appearing in raw data. It estimates word/tag pair and tag-transition frequency information using modelminimization, which also reduces noise introduced by automatic tag dictionary expansion. The approach exploits type annotations effectively to learn parameters for out-of-vocabulary words and infer missing frequency and sequence information. This pipeline is described in detail in the previous work, so we give only a brief overview and describe our additions. The purpose of tag dictionary expansion is to estimate label distributions for tokens in a raw cor586 pus, including words missing in the annotations. For this, a graph connecting annotated words to unannotated words via features is constructed and POS labels are pushed between these items using label propagation (LP) (Talukdar and Crammer, 2009). LP has been used successfully for extending POS labels from high-resource languages to low via parallel corpora (Das and Petrov, 2011; T¨ackstr¨om et al., 2013; Ding, 2011) or high- to low-resource domains (Subramanya et al., 2010), among other tasks. These works have typically used n-gram features (capturing basic syntax) and character affixes (basic morphology). The character n-gram affix-as-morphology approach produces many features, but only a fraction of them represent actual morphemes. 
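To make the contrast concrete, the sketch below derives both kinds of type-level features that could feed the label-propagation graph: generic character affixes and the targeted features read off a morphological analysis (e.g., the analyze() stand-in shown earlier). Feature names and the affix-length cutoff are illustrative, not those of the actual system.

```python
def affix_features(word, max_len=4):
    """Generic character prefix/suffix features: many features, only a
    fraction of which correspond to real morphemes."""
    feats = set()
    for n in range(1, min(max_len, len(word) - 1) + 1):
        feats.add("pre=" + word[:n])
        feats.add("suf=" + word[-n:])
    return feats

def fst_features(word, analyzer):
    """Targeted morphological features: the union of the features from every
    analysis of the word (ambiguity is simply kept, not resolved)."""
    feats = set()
    for analysis in analyzer(word):        # e.g. the analyze() sketch above
        feats.update(f for f in analysis if f.startswith("+"))
    return feats

# A word-type node in the LP graph is linked to every feature node it shares;
# words sharing a feature exchange POS-label mass during propagation, so a few
# precise features are worth more than many noisy ones.
```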
Incorrect features end up pushing noise around the graph, so affixes can lead to more false labels that drown out the true labels. While affixes may be sufficient for languages with limited morphology, their effectiveness diminishes for morphology-rich languages, which have much higher type-to-token ratios. More types means sparser word frequency statistics and more out-of-vocabulary items, and thus problems for EM. Here, we modify the LP graph by supplementing or replacing generic affix features with a focused set of morphological features produced by an FST. These targeted morphological features are effective during LP because words that share them are much more likely to actually share POS tags. FSTs produce multiple analyses, which is actually advantageous for LP. Ambiguities need not be resolved since we just take the union of all morphological features for all analyses and use them as features in the graph. Note that each FST produces its own POS-tags as features, but these do not correspond to the target POS tagset used by the tagger. This is important because it decouples FST development and the final POS task. Thus, any FST for the language, regardless of its provenance, can be used with any target POS tagset. Since the LP graph contains a node for each corpus token, and each node is labeled with a distribution over POS tags, the graph provides a corpus of sentences labeled with noisy tag distributions along with an expanded tag dictionary. This output is useful as input to EM because it contains labels for all seen word types as well as sequence and frequency information. There is a high degree of noise in the LP output, so we employ the model minimization strategy of Ravi et al. (2010), which finds a minimal set of tag bigrams needed to explain the sentences in the raw corpus. It outputs a corpus of tagged sentences, which are used as a good starting point for EM training of an HMM. The expanded tag dictionary constrains the EM search space by providing a limited tagset for each word type, steering EM towards a desirable result. Because the HMM trained by EM will contain zero-probabilities for words that did not appear in the training corpus, we use the “autosupervision” step from our previous work: a Maximum Entropy Markov Model tagger is trained on a corpus that is noisily labeled by the HMM (Garrette and Baldridge, 2012). While training an HMM before the MEMM is not strictly necessary, our tests have shown that this generativethen-discriminative combination generally results in around 3% accuracy improvement. 5 Experiments3 To better understand the effect that each type of supervision has on tagger accuracy, we perform a series of experiments, with KIN and MLG as true low-resource languages. English experiments, for which we had both experienced and novice annotators, allow for further exploration into issues concerning data collection and preparation. The overall best accuracies achieved by language are 81.9% for KIN using all types, 81.2% for MLG using half types and half tokens, and 89.8% for ENG using all types and the maximal amount of raw data. All of these best values were achieved using both FST and affix LP features. All results described in this section are averaged over five folds of raw data. 5.1 Types versus tokens Our primary question was the relationship between annotation type and time. Annotation must be done by someone familiar with the target language, linguistics, and the target POS tagset. 
For many low-resource languages, such people, and the time they have to spend, are likely to be in short supply. To make the best use of their time, we need to know which annotations are most use3Code and all MLG data available at github.com/ dhgarrette/low-resource-pos-tagging-2013 We are unable to provide the KIN or ENG data for download due to licensing restrictions. However, ENG data may be shared with those holding a license for the Penn Treebank and KIN data may be shared on a case-by-case basis. 587 (a) KIN type annotations − Elapsed Annotation Time Accuracy 0:30 1:00 1:30 2:00 2:30 3:00 3:30 4:00 50 55 60 65 70 75 80 No LP Affixes only FST only Affixes+FST (b) KIN token annotations − Elapsed Annotation Time Accuracy 0:30 1:00 1:30 2:00 2:30 3:00 3:30 4:00 50 55 60 65 70 75 80 No LP Affixes only FST only Affixes+FST (c) MLG type annotations − Elapsed Annotation Time Accuracy 0:30 1:00 1:30 2:00 2:30 3:00 3:30 4:00 65 70 75 80 No LP Affixes only FST only Affixes+FST (d) MLG token annotations − Elapsed Annotation Time Accuracy 0:30 1:00 1:30 2:00 2:30 3:00 3:30 4:00 65 70 75 80 No LP Affixes only FST only Affixes+FST Figure 1: Annotation time vs. tagger accuracy for type-only and token-only annotations. Elapsed Annotation Time Accuracy 0:30 1:00 1:30 2:00 2:30 3:00 3:30 4:00 65 70 75 80 85 Experienced annotator − Types Experienced annotator − Tokens Novice annotator − Types Novice annotator − Tokens Figure 2: Annotation time vs. tagger accuracy for ENG type-only and token-only annotations with affix and FST LP features. ful so that efforts can be concentrated there. Additionally, it is useful to identify when returns on annotation effort diminish so that annotators do not spend time doing work that is unlikely to add much value. The annotators produced four hours each of type and token annotations, each in 30-minute increments. To assess the effects of annotation time, we trained taggers cumulatively on each increment and determine the value of each additional halfhour of effort. Results are shown for KIN and MLG in Figure 1 and ENG in Figure 2. In all scenarios, the use of LP (and model minimization) delivers huge performance gains. Additionally, the use of FST features, usually along with affixes, yielded better results than without. This indicates the LP procedure makes effective use of the morphological features produced by the FST and that the affix features are able to capture missing information without adding too much noise to the LP graph. Furthermore, performance is considerably better when type annotations are used than only tokens. Type annotations plateau much faster, so a shorter amount of time must be spent annotating types than if token annotations are used. For KIN it takes approximately 1.5 hours to reach nearmaximum accuracy for types, but 2.5 hours for tokens. This difference is due to the fact that the type annotations started with the most frequent words whereas the token annotations were on random sentences. Thus, type annotations quickly cover a significant portion of the language’s tokens. With annotations directly on tokens, some of the highest 588 (a) KIN − Type/Token Annotation Mixture Accuracy t0/s8 t1/s7 t2/s6 t3/s5 t4/s4 t5/s3 t6/s2 t7/s1 t8/s0 60 65 70 75 80 No LP Affixes only FST only Affixes+FST (b) MLG − Type/Token Annotation Mixture Accuracy t0/s8 t1/s7 t2/s6 t3/s5 t4/s4 t5/s3 t6/s2 t7/s1 t8/s0 70 72 74 76 78 80 No LP Affixes only FST only Affixes+FST Figure 3: Annotation mixture vs. tagger accuracy. X-axis labels give annotation proportions, e.g. 
“t2/s6” indicates 2/8 of the time (1 hour) was spent annotating types and 6/8 (3 hours), full sentences. Type/Token Annotation Mixture Accuracy t0/s8 t1/s7 t2/s6 t3/s5 t4/s4 t5/s3 t6/s2 t7/s1 t8/s0 70 75 80 85 Exp. − With LP Nov. − With LP Exp. − No LP Nov. − No LP Figure 4: Annotation mixture vs. tagger accuracy on ENG using affix and FST LP features for experienced (Exp.) and novice (Nov.) annotators. frequency types are covered, but annotation time is also ineffectively used on low-frequency types that happen to appear in those sentences. Finally, the use of FST features yields the largest gains for KIN, but only when small amounts of annotation are available. This makes sense: KIN is a morphologically rich language, so sparsity is greater and crude affixes capture less actual morphology. With little annotated data, LP relies heavily on morphological features to make clean links between words. But, with more annotations, the gains of the FST over affix features alone diminishes: the affix features eventually capture enough of the morphology to make up the difference. Figure 2 shows the dramatic differences between the experienced and novice ENG annotators.4 For the former, results using types and to4The ENG graph omits “No LP” results since they followed patterns similar to KIN and MLG. Additionally, the results without FST features are not shown because they were nearly identical (though slightly lower) than with the FST. kens were similar after 30 minutes, but type annotations proved much more useful beyond that. In contrast, the novice annotated types much more slowly, so early on there were not enough annotated types for the training to be as effective. Even so, after three hours of annotation, type annotations still win with the novice, and even beat the experienced annotator labeling tokens. 5.2 Mixing type and token annotations Because type and token annotations are each better at providing different information — a tag dictionary of high-frequency words vs. sequence and frequency information — it is reasonable to expect that a combination of the two might yield higher performance by each contributing different but complementary information during training. This matters in low-resource settings because type or token annotations will likely be produced by the same people, so there is a tradeoff between spending resources on one form of annotation over the other. Understanding the best mixture of annotations can inform us on how to maximize the benefit of a set annotation budget. To this end, we ran experiments fixing the annotation time to four hours while varying the mix of type and token annotations. Results are shown for KIN and MLG in Figure 3 and ENG in Figure 4. For KIN and ENG, tagger accuracy increases as the proportion of type annotations increases for all LP feature configurations. For MLG, however, as the reliance on the FST increases, the optimal mixture shifts toward higher type proportions. When only affix features are used, the optimal mixture is 1 hour of types and 3 hours of tokens. When FST and affix features are used, the optimum is 2 hours 589 each of types and tokens. When only FST features are used, it is best to use 3.5 hours of types and only 30 minutes of tokens. Because the FST operates on word types, it is effective at exploiting type annotations. Thus, when the LP focuses more on FST features, it becomes more desirable to have larger amounts of type annotations. Types clearly win for ENG. 
The experienced annotator was much faster at annotating types and the speed difference was less pronounced for tokens, so accuracy is most similar when only token annotations are used. The performance disparity grows with increasing the type proportion. T¨ackstr¨om et al. (2013) explore the use of mixed type and token annotations in which a tagger is learned by projecting information via parallel text. In their experiments, they—like us— found that type information is more valuable than token information. However, they were able to see gains through the complementary effects of mixing type and token annotations. It is likely that this difference in our results is due to the amount of annotated data used. It seems that the amount of type information collected in four hours is not sufficient to saturate the system, meaning that switching to annotating tokens tends to hurt performance. 5.3 FST development The third set of experiments evaluate how the amount of time spent developing an FST affects the performance of trained tagger. To do this, we had our ENG FST developer save progress after each hour (for ten hours). The results show that, for ENG, the FST provided no value, regardless of how much time was spent on its development. Moreover, since large gains in accuracy can be achieved by spending a small amount of time just annotating word types with POS tags, we are led to conclude that time should be spent annotating types or tokens instead of developing an FST. While it is likely that FST development time would have a greater impact for morphologically rich languages, we suspect that greater gains can still be obtained by instead annotating types. Nonetheless, FSTs never seems to hurt performance, so if one is readily available, it should be used. 5.4 The effect of more raw data In addition to annotations, semi-supervised tagger training requires a corpus of raw text. Raw data can be easier to acquire since it does not need the attention of a linguist. Even so, for many Number of Raw Data Tokens Accuracy 100k 200k 300k 400k 500k 600k 80 82 84 86 88 90 4hr types, FST, With LP 4hr types, FST, No LP 1hr types, No FST, With LP Figure 5: Amount of raw data vs. tagger accuracy for ENG using high vs. low amounts of annotation and using LP vs. no LP., for experienced annotator (novice results were similar). low-resource languages, the amount of digitized text, such as transcripts or websites, is very limited and may, in fact, require substantial effort to accumulate, even with assistance from computational tools (Bird, 2011). Therefore, the collection of raw data can be considered another time-sensitive task for which the tradeoffs with previously-discussed annotation efforts must contend. It could be the case that more raw data for training could make up for additional annotation and FST development effort or make the LP procedure unnecessary. Figure 5 shows that that increased raw data does provide increasing gains, but they diminish after 200k tokens. The best performance is achieved by using more annotation and LP. Most importantly, however, removing either annotations or LP results in a significant decline in accuracy, such that even with 600k training tokens, we are unable to achieve the results of high annotation and LP using only 100k tokens. 
5.5 Correcting existing annotations For all of the ENG experiments, we also ran “oracle” experiments using gold tags for the same sentences or a tag dictionary containing the same number of type/tag entries as the annotator produced, but containing only the most frequent entries as determined by the gold-labeled corpus. Using this simulated “perfect annotator” data shows we lose accuracy due to annotator mistakes: for our experienced annotator and maximal FST, using 4 hours of types the oracle accuracy is 90.5 vs. 88.5 while using only tokens we see 83.9 vs. 590 81.5. This indicates that there are gains to be made by correcting mistakes in the annotations. This is true even after the point of diminishing returns on the learning curve, meaning that even when adding more annotations no longer improves performance, progress can still be made by correcting errors, so it may be reasonable to ask annotators to attempt to correct errors in their past annotations. Automated techniques for facilitating error identification can be employed for this (Dickinson and Meurers, 2003). 6 Conclusions and Future Work Care must be taken when drawing conclusions from small-scale annotation studies such as those presented in this paper. Nonetheless, we have explored realistic annotation scenarios for POStagging for low-resource languages and found several consistent patterns. Most importantly, it is clear that type annotations are the most useful input one can obtain from a linguist—provided a semi-supervised algorithm for projecting that information reliably onto raw tokens is available. In a sense, this result validates the research trajectory of efforts of the past two decades put into learning taggers from tag dictionaries: papers have successively removed layers of unrealistic assumptions, and in doing so have produced pipelines for typesupervision that easily beat token-supervision prepared in comparable amounts of time. The result of most immediate practical value is that we show it is possible to train effective POStaggers on actual low-resource languages given only a relatively small amount of unlabeled text and a few hours of annotation by a non-native linguist. Instead of having annotators label full sentences as one might expect the natural choice would be, it is much more effective to simply extract a list of the most frequent word types in the language and concentrate efforts on annotating these types with their potential parts of speech. Furthermore, for languages with rich morphology, a morphological transducer can yield significant performance gains when large amounts of other annotated resources are unavailable. (And it never hurts performance.) Finally, additional raw text does improve performance. However, using substantial amounts of raw text is unlikely to produce gains larger than only a few hours spent annotating types. Thus, when deciding whether to spend time locating larger volumes of digitized text or to spend time annotating types, choose types. Despite the consistent superiority of type annotations in our experiments, it of course may be the case that techniques such as active learning may better select sentences for token annotation, so this should be explored in future work. Acknowledgements We thank Kyle Jerro, Vijay John, Jim Evans, Yoav Goldberg, Slav Petrov, and the reviewers for their assistance and feedback. This work was supported by the U.S. Department of Defense through the U.S. 
Army Research Office (grant number W911NF-10-1-0533) and through a National Defense Science and Engineering Graduate Fellowship for the first author. Experiments were run on the UTCS Mastodon Cluster, provided by NSF grant EIA-0303609. References Steven Abney and Steven Bird. 2010. The human language project: Building a universal corpus of the worlds languages. In Proceedings of ACL. Jason Baldridge and Alexis Palmer. 2009. How well does active learning actually work? Time-based evaluation of cost-reduction strategies for language documentation. In Proceedings of EMNLP, Singapore. Steven Bird. 2011. Bootstrapping the language archive: New prospects for natural language processing in preserving linguistic heritage. Linguistic Issues in Language Technology, 6. Silviu Cucerzan and David Yarowsky. 2002. Bootstrapping a multilingual part-of-speech tagger in one person-day. In Proceedings of CoNLL, Taipei, Taiwan. Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of ACL-HLT, Portland, Oregon, USA. Markus Dickinson and W. Detmar Meurers. 2003. Detecting errors in part-of-speech annotation. In Proceedings of EACL. Weiwei Ding. 2011. Weakly supervised part-ofspeech tagging for Chinese using label propagation. Master’s thesis, University of Texas at Austin. Dan Garrette and Jason Baldridge. 2012. Typesupervised hidden Markov models for part-ofspeech tagging with incomplete tag dictionaries. In Proceedings of EMNLP, Jeju, Korea. 591 Dan Garrette and Jason Baldridge. 2013. Learning a part-of-speech tagger from two hours of annotation. In Proceedings of NAACL, Atlanta, Georgia. Yoav Goldberg, Meni Adler, and Michael Elhadad. 2008. EM can find pretty good HMM POS-taggers (when given a good start). In Proceedings ACL. Aria Haghighi and Dan Klein. 2006. Prototypedriven learning for sequence models. In Proceedings NAACL. Kazi Saidul Hasan and Vincent Ng. 2009. Weakly supervised part-of-speech tagging for morphologically-rich, resource-scarce languages. In Proceedings of EACL, Athens, Greece. Lauri Karttunen. 2001. Applications of finite-state transducers in natural language processing. Lecture Notes in Computer Science, 2088. Julian Kupiec. 1992. Robust part-of-speech tagging using a hidden Markov model. Computer Speech & Language, 6(3). Shen Li, Jo˜ao Grac¸a, and Ben Taskar. 2012. Wiki-ly supervised part-of-speech tagging. In Proceedings of EMNLP, Jeju Island, Korea. Christopher D. Manning. 2011. Part-of-speech tagging from 97% to 100%: Is it time for some linguistics? In Proceedings of CICLing. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2). Bernard Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2). Grace Ngai and David Yarowsky. 2000. Rule writing or annotation: Cost-efficient resource usage for base noun phrase chunking. In Proceedings ACL. Sujith Ravi and Kevin Knight. 2009. Minimized models for unsupervised part-of-speech tagging. In Proceedings of ACL-AFNLP. Sujith Ravi, Ashish Vaswani, Kevin Knight, and David Chiang. 2010. Fast, greedy model minimization for unsupervised tagging. In Proceedings of COLING. Emmanuel Roche and Yves Schabes. 1995. Deterministic part-of-speech tagging with finite-state transducers. Computational Linguistics, 21(2). Amarnag Subramanya, Slav Petrov, and Fernando Pereira. 2010. 
Efficient graph-based semisupervised learning of structured tagging models. In Proceedings EMNLP, Cambridge, MA. Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. In Transactions of the ACL. Association for Computational Linguistics. Partha Pratim Talukdar and Koby Crammer. 2009. New regularized algorithms for transductive learning. In Proceedings of ECML-PKDD, Bled, Slovenia. Kristina Toutanova and Mark Johnson. 2008. A Bayesian LDA-based model for semi-supervised part-of-speech tagging. In Proceedings of NIPS. 592
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 593–603, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Using subcategorization knowledge to improve case prediction for translation to German Marion Weller1 Alexander Fraser2 Sabine Schulte im Walde1 1Institut f¨ur Maschinelle 2Centrum f¨ur InformationsSprachverarbeitung und Sprachverarbeitung Universit¨at Stuttgart Ludwig-Maximilians-Universit¨at M¨unchen {wellermn|schulte}@ims.uni-stuttgart.de [email protected] Abstract This paper demonstrates the need and impact of subcategorization information for SMT. We combine (i) features on sourceside syntactic subcategorization and (ii) an external knowledge base with quantitative, dependency-based information about target-side subcategorization frames. A manual evaluation of an English-toGerman translation task shows that the subcategorization information has a positive impact on translation quality through better prediction of case. 1 Introduction When translating from a morphologically poor language to a morphologically rich language we are faced with two major problems: (i) the richness of the target-language morphology causes data sparsity problems, and (ii) information about morphological features on the target side is not sufficiently contained in the source language morphology. We address these two problems using a twostep procedure. We first replace inflected forms by their stems or lemmas: building a translation system on a stemmed representation of the target side leads to a simpler translation task, and the morphological information contained in the source and target language parts of the translation model is more balanced. In the second step, the stemmed output of the translation is then inflected: the morphological features are predicted, and the inflected forms are generated using the stem and predicted morphological features. In this paper, we focus on improving case prediction for noun phrases (NPs) in German translations. The NP feature case is extremely difficult to predict in German: while the NP features gender and number are part of the stem or can be derived from the source-side input, respectively, the prediction of case requires information about the subcategorization of the entire clause. This is due to German being a less configurational language than English, which encodes grammatical relations (e.g. subject-hood, object-hood, etc.) through the position of constituents. German sentences exhibit a freer constituent order, and thus case is an important indicator of the grammatical functions of noun phrases. Correct case prediction is a crucial factor for the adequacy of SMT output, cf. the example in table 1 providing an erroneously inflected output (this is taken from a baseline “simple inflection prediction” system, cf. section 5.2). The translation of the English input sentence in terms of stems is perfectly acceptable; after the inflection step, however, the translation of NP4 ongoing military actions represents a genitive modifier of the subject NP2, instead of a direct object NP of the verb anordnen (to order). The meaning is thus why the government of the ongoing military actions ordered, which has only one NP and is completely wrong. The translation in table 1 needs verb subcategorization information. This is demonstrated by the invented examples (1) and (2): (1) [Der Mitarbeiter]NP nom hat [den Bericht]NP acc [dem Kollegen]NP dat gegeben. 
[The employee]NP nom gave [his colleague]NP dat [the report]NP acc (2) [Der Mitarbeiter]NP nom hat [dem Bericht]NP dat [des Kollegen]NP gen zugestimmt. [The employee]NP nom agreed [on the report]P P [of his colleague]P P Both inflected sentences rely on the stem sequence [d Mitarbeiter] [d Bericht] [d Kollege] ⟨verb⟩, so the case assignment can only be determined by the verb: While geben ( to give) has a strong preference for selecting a ditransitive subcategorization frame1, including an agentive subject (nomi1A ditransitive verb takes a subject and two objects. 593 input [why]1 [the government]2 [ordered]3 [the ongoing military actions]4 output stemmed [warum]1 [d Regierung]2 [d anhaltend milit¨arisch Aktion]4 [angeordnet]3 inflected [warum]1 [die Regierung]2 [der anhaltenden milit¨arischen Aktionen]4 [angeordnet]3 Table 1: Example for case confusion in SMT output when using a simple prediction system. native case), a benefactive (dative case) and a patient (accusative case), zustimmen (to agree) has a strong preference for only selecting an agentive subject (nominative case) and an indirect object theme (dative case). So in the latter case the NP [d Kollege] cannot receive case from the verb and is instead the genitive modifier of the dative NP. While for examples (1) and (2) knowledge about the syntactic verb subcategorization functions is sufficient to correctly predict the NP cases, examples (3) to (6) require subcategorization information at the syntax-semantic interface. (3) [Der Mitarbeiter]NP nom hat [dem Kollegen]NP dat [den Bericht]NP acc gegeben. (4) [Der Mitarbeiter]NP nom hat [den Bericht]NP acc [dem Kollegen]NP dat gegeben. (5) [Dem Kollegen]NP dat hat [der Mitarbeiter]NP nom [den Bericht]NP acc gegeben. (6) [Den Bericht]NP acc hat [der Mitarbeiter]NP nom [dem Kollegen]NP dat gegeben. In all four examples, the verb and the participating noun phrases Mitarbeiter (employee), Kollege (colleague) and Bericht (report) are identical, and the noun phrases are assigned the same case. However, given that the stemmed output of the translation does not tell us anything about case features, in order to predict the appropriate cases of the three noun phrases, we either rely on ordering heuristics (such that the nominative NP is more likely to be in the beginning of the sentence (the German Vorfeld) than the accusative or dative NP, even though all three of these would be grammatical), or we need fine-grained subcategorization information beyond pure syntax. For example, both Mitarbeiter and Kollege would satisfy the agentive subject role of the verb geben better than Bericht, and Bericht is more likely to be the patient of geben. The contribution of this paper is to improve the prediction of case in our SMT system by implementing and combining two alternative routes to integrate subcategorization information from the syntax-semantic interface: (i) We regard the translation as a function of the source language input, and project the syntactic functions of the English nouns to their German translations in the SMT output. This subcategorization model is necessary when there are several plausible solutions for the syntactic functions of a noun in combination with a verb. For example, both Mitarbeiter and Kollege are plausible subjects and direct objects of the verb geben, so the information about these nouns’ roles in the input sentence allows for disambiguation. 
(ii) The case of an NP is derived from an external knowledge base comprising quantitative, dependency-based information about German verb subcategorization frames and noun modification. The verb subcategorization information is not restricted to syntactic noun functions but models association strength for verb– noun pairs with regard to the entire subcategorization frame plus the syntactic functions of the nouns. For example, the database can tell us that while the verb geben is very likely to subcategorize a ditransitive frame, the verb zustimmen is very likely to subcategorize only a direct object, next to the obligatory subject (subcat frame prediction). Furthermore, we can retrieve the information that the noun Bericht is less likely to appear as subject of geben than the nouns Mitarbeiter and Kollege (verb–noun subcat case prediction). And we can look up that the noun Aktion is very unlikely to be a genitive modification of Regierung (cf. table 1), while Kollege is a plausible genitive modification of Bericht (noun–noun modification case prediction, cf. example (2)). In summary, model (i) applies when there are no obvious preferences concerning verb–noun subcategorization or noun–noun modification. Model (ii) predicts case relying on the subcategorization and modification preferences. The combination of our two models approaches a simplified level of semantic role definition but only relies on dependency information that is considerably easier and cheaper to define and obtain than a very high quality semantic parser and/or a corpus annotated with semantic role information. Integrating semantic role information into SMT has been demonstrated by various researchers to improve translation quality (cf. Wu and Fung (2009a), Wu and Fung (2009b), Liu and Gildea (2008), Liu and Gildea (2010)). Our approach is in line with 594 Wu and Fung (2009b) who demonstrated that on the one hand 84% of verb syntactic functions in a 50-sentence test corpus projected from Chinese to English, and that on the other hand about 15% of the subjects were not translated into subjects, but their semantic roles were preserved across language. These two findings correspond to the expected uses of our models (i) and (ii), respectively. 2 Previous work Previous work has already introduced the idea of generating inflected forms as a post-processing step for a translation system that has been stripped of (most) target-language-specific features. Toutanova et al. (2008) and Jeong et al. (2010) built translation systems that predict inflected word forms based on a large array of morphological and syntactic features, obtained from both source and target side. Kholy and Habash (2012) and Green and DeNero (2012) work on English to Arabic translation and model gender, number and definiteness, focusing primarily on improving fluency. Fraser et al. (2012) used a phrase-based system to transfer stems and generated inflected forms based on the stems and their morphological features. For case prediction, they trained a CRF with access to lemmas and POS-tags within a given window. We re-implemented the system by Fraser et al. as a hierarchical machine translation system using a string-to-tree setup. In contrast to the flat phrase-based setting of Fraser et al. (2012), syntactic trees on the SMT output allow us to work with verb–noun structures, which are relevant for case prediction. While the CRF used for case prediction in Fraser et al. 
(2012) has access to lexical information, it is limited to a certain window size and has no direct information about the relation of verb–noun pairs occurring in the sentence. Using a window of a limited size is particularly problematic for German, as there can be large gaps between the verb and its subcategorized nouns; introducing information about the relation of verbs and nouns helps to bridge such gaps. Furthermore, that model was not able to make effective use of source-side features. One of the objectives of using an inflection prediction model is morphologically well-formed output. Kirchhoff et al. (2012) evaluated user reactions to different error types in machine translation and came to the result that morphological well-formedness has only a marginal impact on the comprehensibility of SMT output in the case of English-Spanish translation. As already discussed, German case is essential to the meaning of the sentence, so this result will not hold for German output. 3 Translation pipeline This section presents an overview of our two-step translation process. In the first step, English input is translated to German stems. In the second step, morphological features are predicted and inflected forms are generated based on the word stems and the morphological features. In subsections 3.1 to 3.4, we present the simple version of the inflection prediction system; our new features are described in sections 4.2 and 4.3. 3.1 Stemmed representation/feature markup We first parse the German side of the parallel training data with BitPar (Schmid, 2004). This maps each surface form appearing in normal text to a stem and morphological features (case, gender, number). We use this representation to create the stemmed representation for training the translation model. With the exception of stem-markup (discussed below), all morphological features are removed from the stemmed representation. The stem markup is used as part of the input to the feature prediction; the basic idea is that the given feature values are picked up by the prediction model and then propagated over the phrase. Nouns, as the head of NPs and PPs, are annotated with gender and number. We consider gender as part of the stem, whereas the value for number is derived from the source-side: if marked for number, singular/plural nouns are distinguished during word alignment and then translated accordingly. Prepositions are also annotated with case; many prepositions are restricted to only one case, some are ambiguous and allow for either dative or accusative. Other words which are subject to feature prediction (e.g. adjectives, articles) are reduced to their stems with no feature markup, as are all remaining words. As sole exception, we keep the inflected forms of verbs (verbal inflection is not modelled). In addition to the translation model, the target-side language model, as well as the reference data for parameter tuning use this representation. 595 3.2 Building a stemmed translation model We use a hierarchical translation system. Instead of translating phrases, a hierarchical system extracts translation rules (Galley et al., 2004) which allow the decoder to provide a tree spanning over the translated sentence. In order to avoid sparsity during rule extraction, we use a string-to-tree setup, where only the target-side part of the data is parsed. 
Translation rules are of the following form: [X]1 allows [X]2 → [NP]1 [NP]2 erlaubt [X]1 allows [X]2 → [NP]1 erlaubt [NP]2 This example illustrates how rules can cover the different word ordering possibilities in German. PP nodes are annotated with their respective case, as well as with the lemma of the preposition they contain. In our experiments, this enriched annotation has small improvements over the simpler setting with only head categories (details omitted). This outcome, in particular that adding the lemma of the preposition to the PP node helps to improve translation quality, has been observed before in tree restructuring work for improving translation (Huang and Knight, 2006). 3.3 Feature prediction and generation of inflected forms In this section we discuss our focus, which is prediction of case, but also the prediction of number, gender and strong/weak adjectival inflection. The latter feature is German-specific; its values2 (strong/weak) depend on the combination of the other features, as well as on the type of determiner (e.g. definite/indefinite/none). Morphological features are predicted on four separate CRF models, one for each feature. The models for case, number and gender are independent of another, whereas the model for adjectival inflection requires information about these features, and is thus the last one to be computed, taking the output of the 3 other models as part of its input. In contrast, the adjectival inflection model in Fraser et al. (2012) is independent from the other features. Each model has access to stems, POS-tags and the feature to be modelled within a window of four positions to the right and the left of the current position3. 2Note that the values for strong/weak inflection are not always the same over the phrase, but follow a certain pattern depending on the settings of case, number and gender. 3Preliminary experiments showed that larger windows do not improve translation quality. Table 2 illustrates the different steps of the inflection process: the markup (number and gender on nouns) in the stemmed output of the SMT system is part of the input to the respective feature prediction. For gender and number, the values given on the stems of the nouns are then propagated over the phrase. While the case of prepositional phrases is determined by the case annotation on prepositions, the case of nominal phrases is computed only based on the respective contexts. After predicting all morphological features, the information required to generate inflected forms is complete: based on the stems and the features, we use the morphological tool SMOR (Schmid et al., 2004) for the generation of inflected forms. One general problem with feature-prediction is that the ill-formed SMT output is not well represented by the training data which consists of wellformed sentences. This problem was also mentioned by Stymne and Cancedda (2011) and Kholy and Habash (2012). They deal with this problem by translating the training data and annotating it with the respective features, and then adding this new data set to the original training data. As this method comes with its own problems, such as transferring the morphological annotation to not necessarily isomorphically translated text, we do not use translated data as part of the training data. Instead, we limit the power of the CRF model through experimenting with the removal of features, until we had a system that was robust to this problem. 
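To make the feature setup of this section more concrete, the sketch below shows how window-based stem/tag features for one position of, for example, the case model could be assembled before being handed to a CRF toolkit. This is only an illustration under our own assumptions: the Token structure, feature names and window handling are not the authors' actual templates.

from collections import namedtuple

# Hypothetical token representation: a stem with markup plus its POS tag,
# e.g. Token("Stabilität<NN><Fem><Sg>", "NN").
Token = namedtuple("Token", ["stem", "tag"])

def window_features(tokens, i, size=4):
    # Unigram stem/tag features within +/- size positions of token i.
    feats = []
    for offset in range(-size, size + 1):
        j = i + offset
        if 0 <= j < len(tokens):
            feats.append("stem[%+d]=%s" % (offset, tokens[j].stem))
            feats.append("tag[%+d]=%s" % (offset, tokens[j].tag))
    # Bigrams of the current item with its immediate neighbours.
    if i > 0:
        feats.append("tag[-1,0]=%s|%s" % (tokens[i - 1].tag, tokens[i].tag))
    if i + 1 < len(tokens):
        feats.append("tag[0,+1]=%s|%s" % (tokens[i].tag, tokens[i + 1].tag))
    return feats

During training, one such feature vector would be produced for every token of the stemmed training sentences together with its gold label for the feature in question; at test time the same features are extracted from the stemmed SMT output.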
3.4 Dealing with word formation issues To reduce data sparsity, we split portmanteau prepositions. Portmanteaus are compounds of prepositions and articles, e.g. zur = zu der (to the). Being components of nominal phrases, they have to agree in all morphological features with the rest of the phrase. As only some combinations of articles and prepositions can form a portmanteau, the decision of whether to merge prepositions and articles is made after feature prediction. Since our focus is case prediction, we do not do special modelling of German compounds. 4 Using subcategorization information Within the area of (automatic) lexical acquisition, the definition of lexical verb information has been a major focus, because verbs play a central role for the structure and the meaning of sentences and 596 SMT output predicted features inflected forms gloss beeinflussen<VVFIN> – beeinflussen influence d<ART> Fem.Acc.Sg.St die the politisch<ADJ> Fem.Acc.Sg.Wk politische political Stabilit¨at<NN><Fem><Sg> Fem.Acc.Sg.Wk Stabilit¨at stability Table 2: Overview of the inflection process: the stem markup is highlighted in the SMT output. discourse. On the one hand, this has led to a range of manually or semi-automatically developed lexical resources focusing on verb information, such as the Levin classes (Levin, 1993), VerbNet (Kipper Schuler, 2006), FrameNet4 (Fillmore et al., 2003), and PropBank (Palmer et al., 2005). On the other hand, we find automatic approaches to the induction of verb subcategorization information at the syntax-semantics interface for a large number of languages, e.g. Briscoe and Carroll (1997) for English; Sarkar and Zeman (2000) for Czech; Schulte im Walde (2002a) for German; Messiant (2008) for French. This basic kind of verb knowledge has been shown to be useful in many NLP tasks such as information extraction (Surdeanu et al., 2003; Venturi1 et al., 2009), parsing (Carroll et al., 1998; Carroll and Fang, 2004) and word sense disambiguation (Kohomban and Lee, 2005; McCarthy et al., 2007). 4.1 Extracting subcategorization information As described in the introductory section, we make use of two5 major kinds of subcategorization information. Verb–noun tuples referring to specific syntactic functions within verb subcategorization (verb–noun subcat case prediction) are integrated with an associated probability for accusative (direct object), dative (indirect object) and nominative (subject).6 Further to the subject and object noun phrases, the subcategorization information provides quantitative triples for verb–preposition–noun pairs, thus predicting the case of NPs within prepositional phrases (we do this only when the prepositions are ambiguious, i.e., they could subcategorize either a dative or an accusative NP). In addition to modelling subcategorization information, it is also important to differentiate between subcategorized noun phrases (such as object or subject), and noun phrases 4Even though the FrameNets approach does not only include knowledge about verbal predicates, the actual lexicons are skewed towards verb behaviour. 5The third kind of information, subcat frame prediction is implicit, since verb–noun tuples rely on specific frames. 6Genitive objects can also occur in German verb subcategorization frames, but this is extremely rare and verb-specific and thus not considered in our model. 
V-SUBJ V-OBJAcc V-OBJDat EP 454,350 332,847 53,711 HGC 712,717 329,830 160,377 Both 1,089,492 607,541 206,764 Table 3: Number of verb-noun types extracted from Europarl (EP) and newspaper data (HGC). that modify nouns (noun–noun modification case prediction). Typically, these NP modifiers are genitive NPs. To this end, we integrate nounnounGen tuples with their respective frequencies. These preferences for a certain function (i.e. subject, object or modifier) are passed on to the system at the level of nouns and integrated into the CRF through the derived probabilities. The tuples and triples are obtained from dependency-parsed data by extracting all occurrences of the respective relations; table 3 gives an overview of the number of extracted tuple types. For the subcategorization information, the verbnoun tuples (verb-subject, verb-objectAcc, verbobjectDat) are then grouped as follows: tuple gloss Acc Dat Nom SchemaN folgenV pattern follow 0 322 19 We compute the probabilities for the verb-noun tuple to occur in the respective functions based on the relative frequencies. In the case of SchemaN folgenV , we find that the function of Schema as dative object is predominant (to follow a pattern), but it can also occur in the subject position (the pattern follows). The fact that two functions are possible for this noun are reflected in their probabilities. The probabilities are discretized into 5 buckets (Bp=0, B0<p≤0.25, B0.25<p≤0.5, B0.5<p≤0.75, B0.75<p≤1). In contrast, noun modification in noun-nounGen construction is represented by cooccurrence frequencies.7 7The frequencies are bucketed to the powers of ten, i.e. f = 1, 2 ≤f ≤10, 11 ≤f ≤100 , etc. and also f = 0: this representation allows for a more fine-grained distinction in the low-to-mid frequency range, providing a good basis for the decision of whether a given noun-noun pair is a true noun-nounGen structure or just a random co-occurrence of two nouns. 597 Gloss Stem Tag Acc Dat Nom Verb Gen N1 Gold 1 companies Unternehmen<NN> NN 0.00 0.00 1.00 erhalten – – Nom 2 should sollten<VVFIN> VVFIN – – – – – – – 3 financial finanziell<ADJ> ADJ – – – – – – Acc 4 funding Mittel<NN> NN 1.00 0.00 0.00 erhalten – – Acc 5 for f¨ur APPR<Acc> PRP – – – – – – – 6 the d<ART> ART – – – – – – Acc 7 introduction Einf¨uhrung<NN> NN – – – – – – Acc 8 new neu<ADJ> ADJ – – – – – – Gen 9 technologies Technologie<NN> NN – – – – 100 Einf¨uhrung<NN> Gen 10 obtain erhalten<VVINF> VVINF – – – – – – – Table 4: Adding subcategorization information into SMT output. (EN input: companies should obtain financial funding for the introduction of new technologies). On the right, the correct labels are given. 4.2 Integrating subcategorization knowledge There are two possibilities to integrate subcategorization information into the case prediction model: (i) It can be integrated into the data set using the tree-structure provided by the decoder. Here, verb-noun tuples are extracted from VP and S structures, and then the probabilities for the different functions are looked up. Similarly, for two adjacent NPs, the occurrence frequencies of the respective two nouns are looked up in the list of noun-nounGen constructions. (ii) The subcategorization information can be integrated based on the verb-noun tuples obtained by using tuples obtained from source-side dependencies. The classification task of the CRF consists in predicting a sequence of labels: case values for NPs/PPs or no value otherwise, cf. table 4. 
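The bucketed probabilities and frequencies that feed into the model can be made explicit with a small sketch (our own illustrative code, not taken from the paper): it maps the relative frequency of a syntactic function to one of the five probability buckets, and a noun-noun co-occurrence count to a power-of-ten bucket.

import math

def probability_bucket(p):
    # Five buckets for verb-noun function probabilities:
    # p=0, (0,0.25], (0.25,0.5], (0.5,0.75], (0.75,1].
    if p == 0.0:
        return "p=0"
    if p <= 0.25:
        return "0<p<=0.25"
    if p <= 0.5:
        return "0.25<p<=0.5"
    if p <= 0.75:
        return "0.5<p<=0.75"
    return "0.75<p<=1"

def frequency_bucket(f):
    # Power-of-ten buckets for noun-noun co-occurrence counts: 0, 1, 2-10, 11-100, ...
    if f <= 1:
        return "f=%d" % f
    upper = 10 ** int(math.ceil(math.log10(f)))
    return "f<=%d" % upper

For the Schema-folgen tuple above, for instance, the dative reading (322 of 341 occurrences, roughly 0.94) falls into the top bucket, while the subject reading (19 of 341, roughly 0.06) lands in the lowest non-zero bucket.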
The model has access to the basic features stem and tag, as well as the new features based on subcategorization information (explained below), using unigrams within a window of up to four positions to the right and the left of the current position, as well as bigrams and trigrams for stems and tags (current item + left and/or right item). An example for integrating subcategorization features is given in table 4. The first word Unternehmen (companies) is annotated as subject of erhalten (obtain) with probability 1, and Mittel (funding) is annotated as direct object of erhalten with probability 1. The word Technologie (technology) has been marked as a candidate for a genitive in a noun-nounGen construction8; the co-occurrence frequency of the tuple Einführung-Technologie (introduction - technology) lies in the bucket 11...100. In addition to the probability/frequency of the respective functions, we also provide the CRF with bigrams containing the two parts of the tuple, i.e. verb+noun or the two nouns of possible noun-nounGen constructions. As can be seen in the example in table 4, the subject (line 1) and the verb (line 10) are far apart from each other. By providing the parts of the tuple as unigrams, bigrams or trigrams to the CRF, all relevant information is available: verb, noun and the probabilities for the potential functions of the noun in the sentence. In addition to bridging the long distance between verbs and subcategorized nouns, a very common problem for German, this type of precise information also helps to close the gap between the well-formed training data and the broken SMT output, as it replaces to a certain extent the target-language context information (n-grams of stems or lemmas within a small window).
8There is no annotation on Einführung as the preposition für is always in accusative case.
Figure 1: Deriving features from dependency-parsed English data via the word alignment. (The figure pairs the EN input "why the government ordered the ongoing military actions" with its DE stemmed output and the derived SUBJ/OBJ features for the verb anordnen.)
4.3 Integrating source-side features
For predicting case in SMT output, information about an NP's function in the input sentence is essential. Syntax-semantic functions can be isomorphic (e.g., English subjects and objects may have the same function in a German translation), but this is not necessarily the case. Despite this, an important advantage of integrating source-side features is that the well-formed source-side text can be reliably parsed, whereas SMT output is often disfluent and cannot be reliably parsed. The English features are obtained from dependency-parsed data (Choi and Palmer, 2012). The relevant annotation of the parser is transferred to the SMT output via word alignment. We focus on English subjects, direct objects and noun-of-noun structures (often equivalent to noun-nounGen phrases on the German side): these structures are generally likely to correspond to each other within source and target language. In contrast to the subcategorization-based information, the difference between well-formed training data and disfluent SMT output tends to work to our benefit here: while the parallel sentences of the training data were manually translated with the objective to produce good target-language sentences, the syntactic structures of the source and target sentences are often diverging.
In contrast, the SMT system often produces more isomorphic translations, which is helpful for annotating source-side features on the target language. Figure 1 shows the process of integrating source-side features: for each German noun that is aligned with an English noun labelled as subject or direct object, this annotation is transferred to the target-side. Using the English dependency structures, the verb subcategorizing the respective noun is identified, and via the alignment, the equivalent German verb is obtained. Similarly, candidates for noun-nounGen structures are identified by extracting and aligning English noun-of-noun phrases. 5 Experiments and evaluation In this section, we present experiments using different feature combinations. We also present a manual evaluation of our best system which shows that the new features improve translation quality. 5.1 Data and experimental setup We use the hierarchical translation system that comes with the Moses SMT-package and GIZA++ to compute the word alignment, using the “growdiag-final-and” heuristics. The rule table was computed with the default parameter setting for GHKM extraction (Galley et al., 2004) in the implementation by Williams and Koehn (2012). Our training data contains 1,485,059 parallel sentences9; the German part of the parallel data is used as the target-side language model. The dev and test sets (1025/1026 lines) are wmt-2009-a/b. For predicting the grammatical features, we used the Wapiti Toolkit (Lavergne et al., 2010).10 9English/German data released for the 2009 ACL Workshop on Machine Translation shared task. 10To eliminate irrelevant features, we use L1 regularizaWe train four CRFs on data prepared as shown in section 3. The corpora used for the extraction of subcategorization tuples were Europarl and German newspaper data (200 million words). We choose this particular data combination in order to provide data that matches the training data, as well as to add new data of the test set’s domain (news). The German part of Europarl was dependencyparsed with Bohnet (2010), and subcategorization information was extracted as described in Scheible et al. (2013); the newspaper data (HGC - Huge German Corpus) was parsed with Schmid (2000), and subcategorization information was extracted as described in Schulte im Walde (2002b). 5.2 Results We report results of two types of systems (table 5): first, a regular translation system built on surface forms (i.e., normal text) and second, four inflection prediction systems. The first inflection prediction system (1) uses a simple case prediction model, whereas the remaining systems are enriched with (2) subcategorization information (cf. section 4.2), (3) source-side features (cf. section 4.3), and (4) both source-side features and subcategorization information. In (2) and (4), the subcategorization information was included using tuples obtained from source-side dependencies11. The simple prediction system corresponds to that presented in section 3; for all inflection prediction systems, the same SMT output and models for number, gender and strong/weak inflection were used; thus the only difference with the simple prediction system is the model for case prediction. We present three types of evaluation: BLEU scores (Papineni et al., 2001), prediction accuracy on clean data and a manual evaluation of the best system in section 5.3. Table 5 gives results in case-insensitive BLEU. 
While the inflection prediction systems (1-4) are significantly12 better than the surface-form system (0), the different versions of the inflection systems are not distinguishable in terms of BLEU; however, our manual evaluation shows that the new features have a positive impact on translation quality. tion; the regularization parameter is optimized on held out data. 11Using tuples extracted from the target-side parse tree (produced by the decoder) results in a BLEU score of 14.00. 12We used Kevin Gimpel’s implementation of pairwise bootstrap resampling with 1000 samples. 599 0 1 2 3 4 surface simple subcat. features source-side source-side system prediction (tuples from EN side) features + subcat. featues BLEU 13.43 14.02 14.05 14.10 14.17 Clean – 85.05 % 85.65 % 85.61 % 85.81 % Table 5: Results of the simple prediction vs. three systems enriched with extra features. One problem with using BLEU as an evaluation metric is that it is a precision-oriented metric and tends to reward fluency rather than adequacy (see (Wu and Fung, 2009a; Liu and Gildea, 2010)). As we are working on improving adequacy, this will not be fully reflected by BLEU. Furthermore, not all components of an NP do necessarily change their inflection with a new case value; it might happen that the only indicator for the case of an NP is the determiner: er sieht [den alten Mann]NPacc (he sees the old man) vs. er folgt [dem alten Mann]NPdat (he follows the old man). While the case marking of NPs is essential for comprehensibility, one changed word per noun phrase is hardly enough to be reflected by BLEU. An alternative to study the effectiveness of the case prediction model is to evaluate the prediction accuracy on parsed clean data, i.e. not on SMT output. In this case, we measure (using the dev set) how often the case of an NP is predicted correctly13. In all cases, the prediction accuracy is better for the enriched systems. This shows that the additional features improve the model, but also that a gain in prediction accuracy on clean data is not necessarily related to a gain in BLEU. We observed that the more complex the model, the less robust it is to differences between the test data and the training data. Related to this problem, we observed that high-order n-gram POS/lemmabased features in the simple prediction (sequences of lemmas and tags) are given too much weight in training and thus make it difficult for the new features to have a larger impact, so we restricted the n-gram order of this type of feature to trigrams. 5.3 Manual evaluation of the best system In order to provide a better understanding of the impact of the presented features, in particular to see whether there is an improvement in adequacy, we carried out a manual evaluation comparing sys13The numbers in table 5 are artificially high and downplay the difference as they also include cases which are very easy to predict, such as nouns in PPs where only one value for case is possible. We measure how many case labels were correctly predicted, not correct inflected forms. enriched simple equal preferred preferred person 1 23 11 12 (a) person 2 21 8 17 person 3 26 11 9 person 1 23 5 18 (b) person 2 21 11 14 person 3 29 8 9 (c) agreement 17 2 6 Table 6: Manual evaluation of 46 sentences: without (a) and with (b) access to EN input, and the annotators’ agreement in the second part (c). tem (4) with the simple prediction system (1). 
From the set of different sentences between the simple prediction system and the enriched system (144 of 1026), we evaluated those where the English input sentence was between 8 and 25 words long (46 sentences in total). We specifically restricted the test set in order to provide sentences which are less difficult to annotate, as longer sentences are often very disfluent and too hard to rate. Most of the sentences in the evaluation set differ only in the realization of one NP. For comparing the two systems, the sentences were presented in random order to 3 native speakers of German. The evaluation consists of two parts: first, the participants were asked to decide which sentence is better without being given the English input (this measures fluency). In the second part, they should to mark that sentence which better reproduces the content of the English input sentence (this measures adequacy). The test set is the same for both tasks, the only difference being that the English input is given in the second part. The results are given in table 6. Summarizing we can say that the participants prefer the enriched system over the simple system in both parts; there is a high agreement (17 cases) in decisions over those sentences which were rated as enriched better. When looking at the pairwise inter-annotator agreement for the task of annotating the test-set with the 3 possible labels enriched preferred, simple preferred and no preference, we find that the annotators P1 and P2 have a substantial agreement 600 input hundreds of policemen were on alert , and [a helicopter]Subj circled the area with searchlights . 1 simple Hunderte von Polizisten auf Trab , und [einen Helikopter]Acc eingekreist das Gebiet mit searchlights . enriched Hunderte von Polizisten auf Trab , und [ein Helikopter]Nom eingekreist das Gebiet mit searchlights . input while 38 %percent put [their trust]Obj in viktor orb´an . 2 simple w¨ahrend 38 % [ihres Vertrauens]Gen schenken in Viktor Orb´an . enriched w¨ahrend 38 % [ihr Vertrauen]Acc schenken in Viktor Orb´an . input more than $ 100 billion will enter [the monetary markets]Obj by means of public sales . 3 simple mehr als 100 Milliarden Dollar werden durch ¨offentlichen Verkauf [der Geldm¨arkte]Gen treten . enriched mehr als 100 Milliarden Dollar werden durch ¨offentlichen Verkauf [die Geldm¨arkte]Acc treten . Table 7: Output from the simple system (1) and the enriched system (4). in terms of Kappa (κ = 0.6184), whereas the agreement of P3 with P1/P2 respectively leads to lower scores (κ = 0.4467 and κ = 0.3596). However, the annotators tend to agree well on sentences with the label enriched preferred, but largely disagree on sentences labelled as either simple preferred or no preference. The number of decisions where all three annotators agree on a label when given the English input is listed in table 6(c): for example, only two sentences were given the label baseline is better by all three annotators. This outcome shows how difficult it is to rate disfluent SMT output. For evaluating the case prediction system, the distinction between enriched preferred and enriched dispreferred is the most important question to answer. Redefining the annotation task to annotating only two values by grouping the labels simple preferred and no preference into one annotation possibility leads to κ = 0.7391, κ = 0.4048 and κ = 0.5652. 5.4 Examples Table 7 shows some examples for output from the simple system and the system using source-side and subcategorization features. 
In the first sentence, the subject NP a helicopter was inflected as a direct object in the simple system, but as a subject in the enriched system, which was preferred by all three annotators. In the second sentence, the NP their trust, i.e. a direct object of put, was incorrectly predicted as genitive-modifier of 38 % (i.e. 38 % of their trust) in the simple system. The enriched system made use of the preference for accusative for the pair Vertrauen schenken (place trust), correctly inflecting this NP as direct object. Interestingly, only two annotators preferred the enriched system, whereas one was undecided. The third sentence illustrates how difficult it is to rate case marking on disfluent SMT output: there are two possibilities to translate enter the money market; the direct equivalent of the English phrase (den GeldmarktAcc betreten), or via the use of a prepositional phrase (auf den GeldmarktAcc treten: “to step into the money market”). The SMT-output contains a mix of both, i.e. the verb treten (instead of betreten), but without the preposition, which cannot lead to a fully correct inflection. While the inflection of the simple system (a genitive construction meaning the public sales of the money market) is definitely wrong, the inflection obtained in the enriched system is not useful either, due to the structure of the translation14. This difficulty is also reflected by the annotators, who gave twice the label no preference and once the label enriched better. 6 Conclusion We illustrated the necessity of using external knowledge sources like subcategorization information for modelling case for English to German translation. We presented a translation system making use of a subcategorization database together with source-side features. Our method is language-independent with regard to the source language; furthermore, no language-specific highquality semantic annotation is needed for the target language, but the data required to model the subcategorization preferences can be obtained using standard NLP techniques. We showed in a manual evaluation that the proposed features have a positive impact on translation quality. Acknowledgements This work was funded by the DFG Research Project Distributional Approaches to Semantic Relatedness (Marion Weller), the DFG Heisenberg Fellowship SCHU-2580/1-1 (Sabine Schulte im Walde), as well as by the Deutsche Forschungsgemeinschaft grant Models of Morphosyntax for Statistical Machine Translation (Alexander Fraser). 14Furthermore, with treten being polysemous, die Geldm¨arkte treten can also mean to kick the money markets. 601 References Bernd Bohnet. 2010. Top Accuracy and Fast Dependency Parsing is not a Contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING) 2010, pages 89– 97, Beijing, August. Ted Briscoe and John Carroll. 1997. Automatic Extraction of Subcategorization from Corpora. In Proceedings of the 5th ACL Conference on Applied Natural Language Processing, pages 356–363, Washington, DC. John Carroll and Alex C. Fang. 2004. The Automatic Acquisition of Verb Subcategorisations and their Impact on the Performance of an HPSG Parser. In Proceedings of the 1st International Joint Conference on Natural Language Processing, pages 107– 114, Sanya City, China. John Carroll, Guido Minnen, and Ted Briscoe. 1998. Can Subcategorisation Probabilities Help a Statistical Parser? In Proceedings of the 6th ACL/SIGDAT Workshop on Very Large Corpora, Montreal, Canada. Jinho D. Choi and Martha Palmer. 2012. 
Getting the Most out of Transition-Based Dependency Parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Charles J. Fillmore, Christopher R. Johnson, and Miriam R.L. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16:235–250. Alexander Fraser, Marion Weller, Aoife Cahill, and Fabienne Cap. 2012. Modeling Inflection and WordFormation in SMT. In Proceedings of the the European Chapter of the Association for Computational Linguistics (EACL), Avignon, France. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a Translation Rule? In Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT-NAACL). Spence Green and John DeNero. 2012. A ClassBased Agreement Model for Generating Accurately Inflected Translations. pages 146–155. Bryant Huang and Kevin Knight. 2006. Relabeling Syntax Trees to Improve Syntax-Based Machine Translation Quality. In Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL. Minwoo Jeong, Kristina Toutanova, Hisami Suzuki, and Chris Quirk. 2010. A Discriminative Lexicon Model for Complex Morphology. In Proceedings of the Ninth Conference of the Association for Machine Translation in the Americas (AMTA 2010). Ahmed El Kholy and Nizar Habash. 2012. Translate, Predict or Generate: Modeling Rich Morphology in Statistical Machine Translation. In European Association for Machine Translation. Karin Kipper Schuler. 2006. VerbNet: A BroadCoverage, Comprehensive Verb Lexicon. Ph.D. thesis, University of Pennsylvania, Computer and Information Science. Katrin Kirchhoff, Daniel Capurro, and Anne Turner. 2012. Evaluating User Preferences in Machine Translation Using Conjoint Analysis. In European Association for Machine Translation. Upali S. Kohomban and Wee Sun Lee. 2005. Learning Semantic Classes for Word Sense Disambiguation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 34–41, Ann Arbor, MI. Thomas Lavergne, Olivier Capp´e, and Franc¸ois Yvon. 2010. Practical very large scale CRFs. In Proceedings the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 504–513. Association for Computational Linguistics, July. Beth Levin. 1993. English Verb Classes and Alternations. The University of Chicago Press. Ding Liu and Daniel Gildea. 2008. Improved Treeto-String Transducers for Machine Translation. In ACL Workshop on Statistical Machine Translation. Ding Liu and Daniel Gildea. 2010. Semantic Role Features for Machine Translation. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING) 2010. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2007. Unsupervised Acquisition of Predominant Word Senses. Computational Linguistics, 33(4):553–590. C´edric Messiant. 2008. A Subcategorization Acquisition System for French Verbs. In Proceedings of the Student Research Workshop at the 46th Annual Meeting of the Association for Computational Linguistics, pages 55–60, Columbus, OH. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated Resource of Semantic Roles. Computational Linguistics, 31(1):71–106. Kishore A. Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. BLEU: a Method for Automatic Evaluation of Machine Translation. Technical Report RC22176 (W0109-022), IBM Research Division, Thomas J. 
Watson Research Center. Anoop Sarkar and Daniel Zeman. 2000. Automatic Extraction of Subcategorization Frames for Czech. In Proceedings of the 18th International Conference on Computational Linguistics, Saarbr¨ucken, Germany. 602 Silke Scheible, Sabine Schulte im Walde, Marion Weller, and Max Kisselew. 2013. A Compact but Linguistically Detailed Database for German Verb Subcategorisation relying on Dependency Parses from a Web Corpus. In Proceedings of the 8th Web as Corpus Workshop, Lancaster, UK. To appear. Helmut Schmid, Arne Fitschen, and Ulrich Heid. 2004. SMOR: a German Computational Morphology Covering Derivation, Composition, and Inflection. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC). Helmut Schmid. 2000. LoPar: Design and Implementation. Arbeitspapiere des Sonderforschungsbereichs 340 ‘Linguistic Theory and the Foundations of Computational Linguistics’ 149, Institut f¨ur Maschinelle Sprachverarbeitung, Universit¨at Stuttgart. Helmut Schmid. 2004. Efficient Parsing of Highly Ambiguous Context-Free Grammars with Bit Vectors. Sabine Schulte im Walde. 2002a. A Subcategorisation Lexicon for German Verbs induced from a Lexicalised PCFG. In Proceedings of the 3rd Conference on Language Resources and Evaluation, volume IV, pages 1351–1357, Las Palmas de Gran Canaria, Spain. Sabine Schulte im Walde. 2002b. A Subcategorisation Lexicon for German Verbs induced from a Lexicalised PCFG. In Proceedings of the 3rd Conference on Language Resources and Evaluation, volume IV, pages 1351–1357, Las Palmas de Gran Canaria, Spain. Sara Stymne and Nicola Cancedda. 2011. Productive Generation of Compound Words in Statistical Machine Translation. In Proceedings of the Sixth Workshop on Machine Translation. Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using Predicate-Argument Structures for Information Extraction. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 8–15, Sapporo, Japan. Kristina Toutanova, Hisami Suzuki, and Achim Ruopp. 2008. Applying Morphology Generation Models to Machine Translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL): Human Language Technologies. Giulia Venturi1, Simonetta Montemagni, Simone Marchi, Yutaka Sasaki, Paul Thompson, John McNaught, and Sophia Ananiadou. 2009. Bootstrapping a Verb Lexicon for Biomedical Information Extraction. In Alexander Gelbukh, editor, Linguistics and Intelligent Text Processing, pages 137–148. Springer, Heidelberg. Philip Williams and Phillipp Koehn. 2012. GHKMRule Extraction and Scope-3 Parsing in Moses. In Proceedings of the 7th Workshop on Statistical Machine Translation, ACL. Dekai Wu and Pascale Fung. 2009a. Can Semantic Role Labeling Improve SMT? In Proceedings of the 13th Annual Conference of the European Association for Machine Translation (EAMT). Dekai Wu and Pascale Fung. 2009b. Semantic Roles for SMT: A Hybrid two-pass Model. In Proceedings of the North American Chapter of the Association for Computational Linguistics and Human Language Technologies Conference (NAACL-HLT). 603
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 604–614, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Name-aware Machine Translation Haibo Li† Jing Zheng‡ Heng Ji† Qi Li† Wen Wang‡ † Computer Science Department and Linguistics Department Queens College and Graduate Center, City University of New York New York, NY, USA 10016 {lihaibo.c, hengjicuny, liqiearth}@gmail.com ‡ Speech Technology & Research Laboratory SRI International Menlo Park, CA, USA 94025 {zj, wwang}@speech.sri.com Abstract We propose a Name-aware Machine Translation (MT) approach which can tightly integrate name processing into MT model, by jointly annotating parallel corpora, extracting name-aware translation grammar and rules, adding name phrase table and name translation driven decoding. Additionally, we also propose a new MT metric to appropriately evaluate the translation quality of informative words, by assigning different weights to different words according to their importance values in a document. Experiments on Chinese-English translation demonstrated the effectiveness of our approach on enhancing the quality of overall translation, name translation and word alignment over a high-quality MT baseline1. 1 Introduction A shrinking fraction of the world’s Web pages are written in English, therefore the ability to access pages across a range of languages is becoming increasingly important. This need can be addressed in part by cross-lingual information access tasks such as entity linking (McNamee et al., 2011; Cassidy et al., 2012), event extraction (Hakkani-Tur et al., 2007), slot filling (Snover et al., 2011) and question answering (Parton et al., 2009; Parton and McKeown, 2010). A key bottleneck of highquality cross-lingual information access lies in the performance of Machine Translation (MT). Traditional MT approaches focus on the fluency and accuracy of the overall translation but fall short in their ability to translate certain content words including critical information, especially names. 1Some of the resources and open source programs developed in this work are made freely available for research purpose at http://nlp.cs.qc.cuny.edu/NAMT.tgz A typical statistical MT system can only translate 60% person names correctly (Ji et al., 2009). Incorrect segmentation and translation of names which often carry central meanings of a sentence can also yield incorrect translation of long contexts. Names have been largely neglected in the prior MT research due to the following reasons: • The current dominant automatic MT scoring metrics (such as Bilingual Evaluation Understudy (BLEU) (Papineni et al., 2002)) treat all words equally, but names have relative low frequency in text (about 6% in newswire and only 3% in web documents) and thus are vastly outnumbered by function words and common nouns, etc.. • Name translations pose a greater complexity because the set of names is open and highly dynamic. It is also important to acknowledge that there are many fundamental differences between the translation of names and other tokens, depending on whether a name is rendered phonetically, semantically, or a mixture of both (Ji et al., 2009). • The artificial settings of assigning low weights to information translation (compared to overall word translation) in some largescale government evaluations have discouraged MT developers to spend time and explore resources to tackle this problem. 
We propose a novel Name-aware MT (NAMT) approach which can tightly integrate name processing into the training and decoding processes of an end-to-end MT pipeline, and a new name-aware metric to evaluate MT which can assign different weights to different tokens according to their importance values in a document. Compared to previous methods, the novel contributions of our approach are: 1. Tightly integrate joint bilingual name tagging into MT training by coordinating tagged 604 names in parallel corpora, updating word segmentation, word alignment and grammar extraction (Section 3.1). 2. Tightly integrate name tagging and translation into MT decoding via name-aware grammar (Section 3.2). 3. Optimize name translation and context translation simultaneously and conduct name translation driven decoding with language model (LM) based selection (Section 3.2). 4. Propose a new MT evaluation metric which can discriminate names and non-informative words (Section 4). 2 Baseline MT As our baseline, we apply a high-performing Chinese-English MT system (Zheng, 2008; Zheng et al., 2009) based on hierarchical phrase-based translation framework (Chiang, 2005). It is based on a weighted synchronous context-free grammar (SCFG). All SCFG rules are associated with a set of features that are used to compute derivation probabilities. The features include: • Relative frequency in two directions P(γ|α) and P(α|γ), estimating the likelihoods of one side of the rule r: X →< γ, α > translating into the other side, where γ and α are strings of terminals and non-terminals in the source side and target side. Non-terminals in γ and α are in one-to-one correspondence. • Lexical weights in two directions: Pw(γ|α) and Pw(α|γ), estimating likelihoods of words in one side of the rule r: X →< γ, α > translating into the other side (Koehn et al., 2003). • Phrase penalty: a penalty exp(1) for a rule with no non-terminal being used in derivation. • Rule penalty: a penalty exp(1) for a rule with at least one non-terminal being used in derivation. • Glue rule penalty: a penalty exp(1) if a glue rule used in derivation. • Translation length: number of words in translation output. Our previous work showed that combining multiple LMs trained from different sources can lead to significant improvement. The LM used for decoding is a log-linear combination of four word n-gram LMs which are built on different English corpora (details described in section 5.1), with the LM weights optimized on a development set and determined by minimum error rate training (MERT), to estimate the probability of a word given the preceding words. All four LMs were trained using modified Kneser-Ney smoothing algorithm (Chen and Goodman, 1996) and converted into Bloom filter LMs (Talbot and Brants, 2008) supporting memory map. The scaling factors for all features are optimized by minimum error rate training algorithm to maximize BLEU score (Och, 2003). Given an input sentence in the source language, translation into the target language is cast as a search problem, where the goal is to find the highest-probability derivation that generates the source-side sentence, using the rules in our SCFG. The source-side derivation corresponds to a synchronous targetside derivation and the terminal yield of this targetside derivation is the output of the system. We employ our CKY-style chart decoder, named SRInterp, to solve the search problem. 3 Name-aware MT We tightly integrate name processing into the above baseline to construct a NAMT model. 
Figure 1 depicts the general procedure. 3.1 Training This basic training process of NAMT requires us to apply a bilingual name tagger to annotate parallel training corpora. Traditional name tagging approaches for single languages cannot address this requirement because they were all built on data and resources which are specific to each language without using any cross-lingual features. In addition, due to separate decoding processes the results on parallel data may not be consistent across languages. We developed a bilingual joint name tagger (Li et al., 2012) based on conditional random fields that incorporates both monolingual and cross-lingual features and conducts joint inference, so that name tagging from two languages can mutually enhance each other and therefore inconsistent results can be corrected simultaneously. This joint name tagger achieved 86.3% bilingual pair F-measure with manual alignment and 84.4% bilingual pair F-measure with automatic alignment as reported in (Li et al., 2012). Given a parallel sentence pair we first apply Giza++ (Och and Ney, 2003) to align words, and apply this join605 Decoding Hierarchical Phrased-based MT Translated Text Translate Bi-text Data Source Text Joint Name Tagger Source Language Name Tagger Name Translator Training Name Pair Miner Extract source language names and add them to dictionaries for source language name tagger Extract name pairs and add them to translation dictionary Extract and add name pairs to phrase table GIZA++ Rule Extractor Extract SCFG rules with combination of name-replaced data and original bi-text data Replace names with non-terminals and combine with the original parallel data Figure 1: Architecture of Name-aware Machine Translation System. t bilingual name tagger to extract three types of names: (Person (PER), Organization (ORG) and Geo-political entities (GPE)) from both the source side and the target side. We pair two entities from two languages, if they have the same entity type and are mapped together by word alignment. We ignore two kinds of names: multi-word names with conflicting boundaries in two languages and names only identified in one side of a parallel sentence. We built a NAMT system from such nametagged parallel corpora. First, we replace tagged name pairs with their entity types, and then use Giza++ and symmetrization heuristics to regenerate word alignment. Since the name tags appear very frequently, the existence of such tags yields improvement in word alignment quality. The re-aligned parallel corpora are used to train our NAMT system based on SCFG. Since the joint name tagger ensures that each tagged source name has a corresponding translation on the target side (and vice versa), we can extract SCFG rules by treating the tagged names as non-terminals. However, the original parallel corpora contain many high-frequency names, which can already be handled well by the baseline MT. Some of these names carry special meanings that may influence translations of the neighboring words, and thus replacing them with non-terminals can lead to information loss and weaken the translation model. To address this issue, we merged the name-replaced parallel data with the original parallel data and extract grammars from the combined corpus. For example, given the following sentence pair: • -ý Íù e ¿› Ëe ‰åÉ ²• . • China appeals to world for non involvement in Angola conflict . after name tagging it becomes • GPE Íù e ¿› Ëe GPE ²• . • GPE appeals to world for non involvement in GPE conflict . 
Both sentence pairs are kept in the combined data to build the translation model.

3.2 Decoding

During the decoding phase, we extract names from a source document with the baseline monolingual name tagger described in (Li et al., 2012). Its performance is comparable to the best reported results on Chinese name tagging on Automatic Content Extraction (ACE) data (Ji and Grishman, 2006; Florian et al., 2006; Zitouni and Florian, 2008; Nguyen et al., 2010). Then we apply a state-of-the-art name translation system (Ji et al., 2009) to translate names into the target language. The name translation system is composed of the following steps: (1) dictionary matching based on 150,041 name translation pairs; (2) statistical name transliteration based on a structured perceptron model and a character-based MT model (Dayne and Shahram, 2007); (3) re-ranking based on context information extraction.

In our NAMT framework, we add the following extensions to name translation. We developed a name origin classifier based on a Chinese last name list (446 name characters) and name structure parsing features to distinguish Chinese person names from foreign person names (Ji, 2009), so that pinyin conversion is applied to Chinese names while name transliteration is applied only to foreign names. This classifier works reasonably well in most cases (about 92% classification accuracy), except when a common Chinese last name appears as the first character of a foreign name, such as "朱莉", which can be translated either as "Jolie" or "Zhu Li". For names with fewer than five instances in the training data, we use the name translation system to provide translations; the rest of the names are left to the baseline MT model to handle.

The joint bilingual name tagger was also exploited to mine bilingual name translation pairs from the parallel training corpora. The mapping score between a Chinese name and an English name was computed from the number of aligned tokens. A name pair is extracted if its mapping score is the highest among all combinations and the name types on both sides are identical. It is necessary to incorporate word alignment as an additional constraint because the order of names is often changed after translation. The extracted 9,963 unique name translation pairs were also used to create an additional name phrase table for NAMT; manual evaluation on 2,000 name pairs showed an accuracy of 86%. The non-terminals in SCFG rules are rewritten to the extracted names during decoding, therefore allowing unseen names in the test data to be translated. Finally, based on the LMs, our decoder exploits the dynamically created phrase table from name translation, competing with the originally extracted rules, to find the best translation for the input sentence.

4 Name-aware MT Evaluation

Traditional MT evaluation metrics such as BLEU (Papineni et al., 2002) and Translation Edit Rate (TER) (Snover et al., 2006) assign the same weight to all tokens. For example, incorrect translations of "the" and "Bush" will receive the same penalty. However, for cross-lingual information processing applications, we should acknowledge that certain informationally critical words are more important than other common words. In order to properly evaluate the translation quality of NAMT methods, we propose to modify the BLEU metric so that it dynamically assigns more weight to names during evaluation.
BLEU considers the correspondence between a system translation and a human translation:

BLEU = BP \cdot \exp\left( \sum_{n=1}^{N} w_n \log p_n \right)    (1)

where BP is the brevity penalty, defined as:

BP = \begin{cases} 1 & \text{if } c > r \\ e^{(1-r/c)} & \text{if } c \leq r \end{cases}    (2)

where w_n is a set of positive weights summing to one, usually set uniformly as w_n = 1/N, c is the length of the system translation, r is the length of the reference translation, and p_n is the modified n-gram precision, defined as:

p_n = \frac{\sum_{C \in \text{Candidates}} \sum_{\text{n-gram} \in C} Count_{clip}(\text{n-gram})}{\sum_{C' \in \text{Candidates}} \sum_{\text{n-gram}' \in C'} Count_{clip}(\text{n-gram}')}    (3)

where C and C' are translation candidates in the candidate sentence set, if a source sentence is translated into many candidate sentences. As in the BLEU metric, we first count the maximum number of times an n-gram occurs in any single reference translation. The total count of each candidate n-gram is clipped at the sentence level by its maximum reference count. Then we add up the weights of the clipped n-grams and divide them by the total weight of all n-grams.

Based on the BLEU score, we design a name-aware BLEU metric as follows. Depending on whether a token t is contained in a name in the reference translation, we assign a weight weight_t to t:

weight_t = \begin{cases} 1 - e^{-tf(t,d) \cdot idf(t,D)} & \text{if } t \text{ never appears in names} \\ 1 + \frac{P_E}{Z} & \text{if } t \text{ occurs in name(s)} \end{cases}    (4)

where P_E is the sum of the penalties of non-name tokens and Z is the number of tokens within all names:

P_E = \sum_{t \text{ never appears in names}} e^{-tf(t,d) \cdot idf(t,D)}    (5)

In this paper, the tf·idf score is computed at the sentence level; therefore, D is the sentence set and each d ∈ D is a sentence. The weight of an n-gram in the reference translation is the sum of the weights of all tokens it contains:

weight_{\text{n-gram}} = \sum_{t \in \text{n-gram}} weight_t    (6)

Next, we compute the weighted clipped n-gram count Count_{weight-clip}(n-gram) as follows:

Count_{weight\text{-}clip}(\text{n-gram}) = \sum_{\text{n-gram}_i \text{ correctly translated}} weight_{\text{n-gram}_i}    (7)

The Count_{clip}(n-gram) in Equation 3 is substituted with the above Count_{weight-clip}(n-gram), yielding the weighted modified n-gram precision wp_n. When we sum up the total weight of all n-grams of a candidate translation, some n-grams may contain tokens which do not exist in the reference translation; we assign the lowest weight of the tokens in the reference translation to these rare tokens. We also add a name penalty NP to penalize output sentences which contain too many or too few names:

NP = e^{-(\frac{u}{v}-1)^2 / 2\sigma}    (8)

where u is the number of name tokens in the system translation and v is the number of name tokens in the reference translation. Finally, the name-aware BLEU score is defined as:

BLEU_{NA} = BP \cdot NP \cdot \exp\left( \sum_{n=1}^{N} w_n \log wp_n \right)    (9)

This new metric can also be applied to evaluate MT approaches which emphasize other types of facts, such as events, by simply replacing name tokens with other fact tokens.

5 Experiments

In this section we present the experimental results of NAMT compared to the baseline MT.

5.1 Data Set

We used a large Chinese-English MT training corpus from various sources and genres (including newswire, web text, broadcast news, and broadcast conversations) for our experiments. We also used some translation lexicon data and Wikipedia translations. The majority of the data sets were collected or made available by LDC for the U.S. DARPA Translingual Information Detection, Extraction and Summarization (TIDES) program, the Global Autonomous Language Exploitation (GALE) program, the Broad Operational Language Translation (BOLT) program, and National Institute of Standards and Technology (NIST) MT evaluations. The training corpus includes 1,686,458 sentence pairs.
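A minimal sketch of the token weighting and name penalty defined in Section 4 above (an illustration only, not the official scoring script; the input format, helper names, and placeholder idf values are assumptions):

```python
import math
from collections import Counter

def token_weights(reference, name_tokens, tf, idf):
    """reference: list of tokens; name_tokens: set of tokens appearing inside names;
    tf/idf: dicts giving tf(t, d) and idf(t, D) for the current sentence."""
    non_name = [t for t in reference if t not in name_tokens]
    pe = sum(math.exp(-tf[t] * idf[t]) for t in non_name)          # Eq. (5)
    z = sum(1 for t in reference if t in name_tokens)
    weights = {}
    for t in reference:
        if t in name_tokens:
            weights[t] = 1.0 + (pe / z if z else 0.0)              # Eq. (4), name token
        else:
            weights[t] = 1.0 - math.exp(-tf[t] * idf[t])           # Eq. (4), non-name token
    return weights

def name_penalty(num_name_tokens_sys, num_name_tokens_ref, sigma=1.0):
    u, v = num_name_tokens_sys, num_name_tokens_ref
    return math.exp(-((u / v - 1.0) ** 2) / (2.0 * sigma))          # Eq. (8)

ref = "the White House said Bush will visit Japan".split()
names = {"White", "House", "Bush", "Japan"}
tf = Counter(ref)                      # sentence-level term frequencies
idf = {t: 1.0 for t in ref}            # placeholder idf values
weights = token_weights(ref, names, tf, idf)
print(weights["Bush"], weights["the"], name_penalty(3, 4))
```

Plugging these weights into the clipped n-gram counts as in Eqs. (6)-(9) yields the name-aware BLEU score.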
The joint name tagger extracted 1,890,335 name pairs (295,087 persons, 1,269,056 geo-political entities, and 326,192 organizations). Four LMs, denoted LM1, LM2, LM3, and LM4, were trained from different English corpora. LM1 is a 7-gram LM trained on the target side of the Chinese-English and Egyptian Arabic-English parallel text, the English monolingual discussion forums data R1-R4 released in BOLT Phase 1 (LDC2012E04, LDC2012E16, LDC2012E21, LDC2012E54), and English Gigaword Fifth Edition (LDC2011T07). LM2 is a 7-gram LM trained only on the English monolingual discussion forums data listed above. LM3 is a 4-gram LM trained on the web genre among the target side of all parallel text (i.e., web text from pre-BOLT parallel text and BOLT-released discussion forum parallel text). LM4 is a 4-gram LM trained on the English broadcast news and conversation transcripts released under the DARPA GALE program. Note that for the LM4 training data, some transcripts were quick transcripts and quick rich transcripts released by LDC, and some were generated by running flexible alignment of closed captions or speech recognition output from LDC on the audio data (Venkataraman et al., 2004).

In order to demonstrate the effectiveness and generality of our approach, we evaluated our approach on seven test sets from multiple genres and domains. We asked four annotators to annotate names in the four reference translations of each sentence and an expert annotator to adjudicate the results. The detailed statistics and name distribution of each test data set are shown in Table 1. The percentage of names occurring fewer than 5 times in the training data is listed in brackets in the last column of the table.

Corpus     Genre       Sentence #   Word # (source)   Token # (reference)   GPE (%)        PER (%)      ORG (%)      All names (% occurred < 5)
BOLT 1     forum       1,200        20,968            24,193                875 (82.9)     90 (8.5)     91 (8.6)     1,056 (51.4)
BOLT 2     forum       1,283        23,707            25,759                815 (73.7)     141 (12.8)   149 (13.5)   1,105 (65.9)
BOLT 3     forum       2,000        38,595            42,519                1,664 (80.4)   204 (9.8)    204 (9.8)    2,072 (47.4)
BOLT 4     forum       1,918        41,759            47,755                1,852 (80.0)   348 (25.0)   113 (5.0)    2,313 (53.3)
BOLT 5     blog        950          23,930            26,875                352 (42.5)     235 (28.3)   242 (29.2)   829 (55.3)
NIST2006   news&blog   1,664        38,442            45,914                1,660 (58.2)   568 (19.9)   625 (21.9)   2,853 (73.1)
NIST2008   news&blog   1,357        32,646            37,315                700 (47.9)     367 (25.1)   395 (27.0)   1,462 (72.0)

Table 1: Statistics and Name Distribution of Test Data Sets.

5.2 Overall Performance

Besides the new name-aware MT metric, we also adopt two traditional metrics: TER, to evaluate the overall translation performance, and Named Entity Weak Accuracy (NEWA) (Hermjakob et al., 2008), to evaluate the name translation performance. TER measures the amount of edits required to change a system output into one of the reference translations:

TER = \frac{\#\ \text{of edits}}{\text{average}\ \#\ \text{of reference words}}    (10)

Possible edits include insertion, substitution, deletion, and shifts of words. The NEWA metric is defined as follows; using a manually assembled name variant table, we also support the matching of name variants (e.g., "World Health Organization" and "WHO"):

NEWA = \frac{\text{Count of correctly translated names}}{\text{Count of names in references}}    (11)
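A small illustrative sketch of the NEWA computation in Eq. (11); the name variant table format and the substring-matching policy used here are simplified assumptions, not a description of the authors' scorer:

```python
def newa(system_outputs, reference_names, variants=None):
    """system_outputs: list of translated sentences (strings);
    reference_names: list of lists of reference name strings per sentence;
    variants: optional dict mapping a name to acceptable variant strings."""
    variants = variants or {}
    correct, total = 0, 0
    for hyp, names in zip(system_outputs, reference_names):
        for name in names:
            total += 1
            candidates = [name] + list(variants.get(name, []))
            if any(c in hyp for c in candidates):
                correct += 1
    return correct / total if total else 0.0

hyps = ["the WHO issued a warning", "obama met with hu jintao"]
refs = [["World Health Organization"], ["Obama", "Hu Jintao"]]
var = {"World Health Organization": ["WHO"]}
print(newa(hyps, refs, var))  # counts WHO via the variant table; misses case-mismatched names
```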
For a better comparison with NAMT, besides the original baseline we developed another baseline system by adding the name translation table into the phrase table (NPhrase). Table 2 presents the performance of overall translation and name translation.

Metric            System     BOLT 1   BOLT 2   BOLT 3   BOLT 4   BOLT 5   NIST2006   NIST2008
BLEU              Baseline   14.2     14.0     17.3     15.6     15.3     35.5       29.3
                  NPhrase    14.1     14.4     17.1     15.4     15.3     35.4       29.3
                  NAMT       14.2     14.6     16.9     15.7     15.5     36.3       30.0
Name-aware BLEU   Baseline   18.2     17.9     18.6     17.6     18.3     36.1       31.7
                  NPhrase    18.1     18.8     18.5     18.1     18.0     35.8       31.8
                  NAMT       18.4     19.5     19.7     18.2     18.9     39.4       33.1
TER               Baseline   70.6     71.0     69.4     70.3     67.1     58.7       61.0
                  NPhrase    70.6     70.4     69.4     70.4     67.1     58.7       60.9
                  NAMT       70.3     70.2     69.2     70.1     66.6     57.7       60.5
NEWA (All)        Baseline   69.7     70.1     73.9     72.3     60.6     66.5       60.4
                  NPhrase    69.8     71.1     73.8     72.5     60.6     68.3       61.9
                  NAMT       71.4     72.0     77.7     75.1     62.7     72.9       63.2
NEWA (GPE)        Baseline   72.8     78.4     80.0     78.7     81.3     79.2       76.0
                  NPhrase    73.6     79.3     79.2     78.9     82.3     82.6       79.5
                  NAMT       74.2     80.2     82.8     80.4     79.3     85.5       79.3
NEWA (PER)        Baseline   53.3     44.7     45.1     49.4     48.9     54.2       51.2
                  NPhrase    52.2     45.4     48.9     48.5     47.6     55.1       50.9
                  NAMT       55.6     45.4     58.8     55.2     56.2     60.0       52.3
NEWA (ORG)        Baseline   56.0     49.0     52.9     38.1     41.7     44.0       41.3
                  NPhrase    50.5     50.3     54.4     40.7     41.3     42.2       40.7
                  NAMT       60.4     52.3     55.4     41.6     45.0     51.0       44.8

Table 2: Translation Performance (%).

We can see that, except for the BOLT 3 data set with the BLEU metric, our NAMT approach consistently outperformed the baseline system on all data sets with all metrics, and provided up to 23.6% relative error reduction on name translation. According to the Wilcoxon Matched-Pairs Signed-Ranks Test, the improvement is not significant with the BLEU metric, but is significant at the 98% confidence level with all of the other metrics. The gains are more significant for formal genres than for informal genres, mainly because most of the training data for name tagging and name translation were from newswire. Furthermore, using the external name translation table alone did not improve translation quality on most test sets, except for BOLT 2. Therefore, it is important to use the name-replaced corpora for rule extraction to fully take advantage of the improved word alignment.

Many errors from the baseline MT approach occurred because some parts of out-of-vocabulary names were mistakenly segmented into common words. For example, the baseline MT system mistakenly translated the person name "孙红雷 (Sun Honglei)" into "Sun red thunder". In informal genres such as discussion forums and web blogs, even common names often appear in rare forms due to misspelling or morphing. For example, "奥巴马 (Obama)" was mistakenly translated into "Ma Olympic". Such errors can be compounded when word re-ordering is applied. For example, the following sentence, "[Chinese source sentence] (Guo Meimei's strength really is formidable, I really admire her)", was mistakenly translated into "Guo the strength of the America and the America also really strong , ah , really admire her" by the baseline MT system, because the person name "郭美美 (Guo Meimei)" was mistakenly segmented into three words: "郭 (Guo)", "美 (the America)" and "美 (the America)". Our NAMT approach successfully identified and translated this name and also generated a better overall translation: "Guo Meimei 's power is also really strong , ah , really admire her".

[Figure 2: Scores based on Automatic Metrics and Human Evaluation: BLEU and name-aware BLEU scores, and ratings from three human judges (Hum. 1-3), for the baseline and NAMT systems.]
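The significance claim above uses the Wilcoxon Matched-Pairs Signed-Ranks Test. As a rough illustration only (the exact pairing granularity the authors used is not specified here), the test can be applied to the paired per-test-set NEWA (All) scores from Table 2 with scipy:

```python
# Paired significance test over the seven test sets; the score lists are the
# NEWA (All) rows for the baseline and NAMT from Table 2.
from scipy.stats import wilcoxon

baseline_newa = [69.7, 70.1, 73.9, 72.3, 60.6, 66.5, 60.4]
namt_newa     = [71.4, 72.0, 77.7, 75.1, 62.7, 72.9, 63.2]

stat, p_value = wilcoxon(baseline_newa, namt_newa)
print(f"W={stat:.1f}, p={p_value:.3f}")  # p < 0.02 corresponds to >98% confidence
```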
5.3 Name-aware BLEU vs. Human Evaluation

In order to investigate the correlation between name-aware BLEU scores and human judgment, we asked three bilingual speakers to judge our translation output from the baseline system and the NAMT system on a Chinese subset of 250 sentences (each sentence has two corresponding translations, from the baseline and from NAMT) extracted randomly from the 7 test corpora. The annotators rated each translation from 1 (very bad) to 5 (very good) and made their judgments based on whether the translation is understandable and conveys the same meaning. We computed the name-aware BLEU scores on the subset and also the aggregated average scores from the human judgments. Figure 2 shows that NAMT consistently achieved higher scores with both the name-aware BLEU metric and human judgment. Furthermore, we calculated three Pearson product-moment correlation coefficients between the human judgment scores and the name-aware BLEU scores of these two MT systems. Given the sample size and the correlation coefficient value, the high significance value of 0.99 indicates that name-aware BLEU tracks human judgment well.

5.4 Word Alignment

It is also important to investigate the impact of our NAMT approach on improving word alignment. We conducted an experiment on the Chinese-English Parallel Treebank (Li et al., 2010), which has ground-truth word alignment. The detailed procedure following the NAMT framework is as follows: (1) run the joint bilingual name tagger; (2) replace each name string with its name type (PER, ORG, or GPE), and run Giza++ on the replaced sentences; (3) run Giza++ on the words within each name pair; (4) merge (2) and (3) to produce the final word alignment results. In order to compare with the upper-bound gains, we also measured the performance of applying ground-truth name tagging with the above procedure. The experimental results are shown in Table 3.

Words                Method                                      P      R      F
Overall Words        Baseline Giza++                             69.8   47.8   56.7
                     Joint Name Tagging                          70.4   48.1   57.1
                     Ground-truth Name Tagging (Upper-bound)     71.3   48.9   58.0
Words Within Names   Baseline Giza++                             86.0   31.4   46.0
                     Joint Name Tagging                          77.6   37.2   50.3

Table 3: Impact of Joint Bilingual Name Tagging on Word Alignment (%).

For the words within names, our approach provided significant gains, enhancing F-measure from 46.0% to 50.3%. Only 10.6% of words are within names; therefore, the upper-bound gain on overall word alignment is only 1.3%. Our joint name tagging approach achieved a 0.4% (statistically significant) improvement over the baseline. In Figure 3 we categorized the sentences according to the percentage of name words in each sentence and measured the improvement for each category. We can clearly see that as the sentences include more names, the gains achieved by our approach tend to be greater.

[Figure 3: Word alignment F-measure gains as a function of the percentage of name tokens per sentence, for baseline Giza++, joint name tagging, and ground-truth name tagging (upper bound).]

5.5 Remaining Error Analysis

Although the proposed model has significantly enhanced translation quality, some challenges remain. We analyze the major sources of the remaining errors as follows.

1. Name Structure Parsing. We found that the gains of our NAMT approach were mainly achieved for names with one or two components. When the name structure becomes too complicated to parse, name tagging and name translation are likely to produce errors, especially for long nested organizations. For example, "Anti-malfeasance Bureau of Gutian County Procuratorate" consists of a nested organization name with a GPE as modifier, "Gutian County Procuratorate", and an ORG name, "Anti-malfeasance Bureau".
2. Name Abbreviation Tagging and Translation. Some organization abbreviations are also difficult to extract because our name taggers have not incorporated any coreference resolution techniques. For example, without knowing that "FAW" refers to "First Automotive Works" in "FAW has also utilized the capital market to directly finance, and now owns three domestic listed companies", our system mistakenly labeled it as a GPE. The same challenge exists in name alignment and translation (for example, "民革 (Min Ge)" refers to "中国国民党革命委员会" (Revolutionary Committee of the Chinese Kuomintang)).

3. Cross-lingual Information Transfer. English monolingual features normally generate higher confidence than Chinese features for ORG names. On the other hand, some good propagated Chinese features were not able to correct English results. For example, in a sentence pair containing "... (in accordance with the tripartite agreement reached by China, Laos and the UNHCR on) ...", even though the tagger can successfully label the Chinese mention of "UNHCR" as an organization because it is a common name in Chinese, English features based on previous GPE contexts still incorrectly predicted "UNHCR" as a GPE name.

6 Related Work

Two types of strategies were previously attempted to build name translation components that operate in tandem with and are loosely integrated into conventional statistical MT systems:

1. Pre-processing: identify names in the source texts and propose name translations to the MT system; the name translation results can be simply but aggressively transferred from the source to the target side using word alignment, or added into the phrase table in order to enable the LM to decide which translations to choose when encountering the names in the texts (Ji et al., 2009). Heuristic rules or supervised models can be developed to create a "do-not-translate" list (Babych and Hartley, 2003) or to learn "when-to-transliterate" (Hermjakob et al., 2008).

2. Post-processing: in a cross-lingual information retrieval or question answering framework, online query names can be utilized to obtain translations and post-edit the MT output (Parton et al., 2009; Ma and McKeown, 2009; Parton and McKeown, 2010; Parton et al., 2012).

It is challenging to decide when to use name translation results. The simple transfer method ensures that all name translations appear in the MT output, but it heavily relies on word alignment and does not take into account word re-ordering or the words found in a name's context; therefore it can mistakenly break some context phrase structures due to name translation or alignment errors. The LM selection method often assigns an inappropriate weight to the additional name translation table because it is constructed independently from the translation of context words; therefore, after weighted voting, most correct name translations are not used in the final translation output. Our experimental results (Table 2) confirmed this weakness. More importantly, in these approaches the MT model was still mostly treated as a "black box" because neither the translation model nor the LM was updated or adapted specifically for names.

Recently the wider idea of incorporating semantics into MT has received increased interest.
Most of this work designed certain semantic representations, such as predicate-argument structure or semantic role labeling (Wu and Fung, 2009; Liu and Gildea, 2009; Meyer et al., 2011; Bojar and Wu, 2012), word sense disambiguation (Carpuat and Wu, 2007b; Carpuat and Wu, 2007a), and graph-structured grammar representations (Jones et al., 2012). Lo et al. (2012) proposed a semantic-role-driven MT metric. However, none of this work explicitly exploited results from information extraction for MT. Some statistical MT systems (e.g., Zens et al., 2005; Aswani and Gaizauskas, 2005) have attempted to use text normalization to improve word alignment for dates, numbers, and job titles, but little reported work has shown the impact of joint name tagging on overall word alignment. Most of the previous name translation work combined supervised transliteration approaches with LM-based re-scoring (Knight and Graehl, 1998; Al-Onaizan and Knight, 2002; Huang et al., 2004). Some recent research used comparable corpora to mine name translation pairs (Feng et al., 2004; Kutsumi et al., 2004; Udupa et al., 2009; Ji, 2009; Fung and Yee, 1998; Rapp, 1999; Shao and Ng, 2004; Lu and Zhao, 2006; Hassan et al., 2007). However, most of these approaches required a large amount of seeds, suffered from information extraction errors, and relied on phonetic similarity, context co-occurrence, and document similarity for re-scoring. In contrast, the name pair mining approach described in this paper does not require any machine translation or transliteration features.

7 Conclusions and Future Work

We developed a name-aware MT framework which tightly integrates name tagging and name translation into the training and decoding of MT. Experiments on Chinese-English translation demonstrated the effectiveness of our approach over a high-quality MT baseline in both overall translation and name translation, especially for formal genres. We also proposed a new name-aware evaluation metric. In the future we intend to improve the framework by training a discriminative model to automatically assign weights for combining name translation and baseline translation, with additional features including name confidence values, name types, and global validation evidence, as well as conducting LM adaptation through bilingual topic modeling and clustering based on name annotations. We also plan to jointly optimize MT and name tagging by propagating multiple word segmentation and name annotation hypotheses in a lattice structure to statistical MT and conducting lattice-based decoding (Dyer et al., 2008). Furthermore, we are interested in extending this framework to translate other out-of-vocabulary terms.

Acknowledgement

This work was supported by the U.S. Army Research Laboratory under Cooperative Agreement No. W911NF-09-2-0053 (NS-CTA), the U.S. NSF CAREER Award under Grant IIS-0953149, the U.S. NSF EAGER Award under Grant No. IIS-1144111, the U.S. DARPA FA8750-13-2-0041 Deep Exploration and Filtering of Text (DEFT) Program, and a CUNY Junior Faculty Award. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
We express our gratitude to Bing Zhao who provided the test sets and references that were used for Broad Operational Language Translation (BOLT) evaluation and thanks to Taylor Cassidy for constructive comments. References Y. Al-Onaizan and K. Knight. 2002. Translating Named Entities Using Monolingual and Bilingual Resources. In Proceeding ACL’02, pages 400–408. N. Aswani and R. Gaizauskas. 2005. A Hybrid Approach to Align Sentences and Words in EnglishHindi Parallel Corpora. In Proceeding ACL’05 Workshop on Building and Using Parallel Texts, pages 57–64. Bogdan Babych and Anthony Hartley. 2003. Improving Machine Translation Quality with Automatic Named Entity Recognition. In Proceeding EAMT ’03 workshop on MT and other Language Technology Tools, Improving MT through other Language Technology Tools: Resources and Tools for Building MT, pages 1–8. O. Bojar and D. Wu. 2012. Towards a PredicateArgument evaluation for MT. In Proceeding of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 30–38, July. Marine Carpuat and Dekai Wu. 2007a. How Phrase Sense Disambiguation outperforms Word Sense Disambiguation for Statistical Machine Translation. In Proceeding TMI’07, pages 43–52. Marine Carpuat and Dekai Wu. 2007b. Improving Statistical Machine Translation using Word Sense Disambiguation. In Proceeding EMNLP-CoNLL’07, pages 61–72. Taylor Cassidy, Heng Ji, Hongbo Deng, Jing Zheng, and Jiawei Han. 2012. Analysis and Refinement of Cross-lingual Entity Linking. In Proceeding CLEF’12, pages 1–12. Stanley F. Chen and Joshua Goodman. 1996. An Empirical Study of Smoothing Techniques for Language Modeling. Proceeding of ACL’96, pages 310–318. 612 David Chiang. 2005. A Hierarchical Phrase-based Model for Statistical Machine Translation. In Proceeding ACL’05, pages 263–270. F. Dayne and K. Shahram. 2007. A Sequence Alignment Model Based on the Averaged Perceptron. In Proceeding EMNLP-CoNLL’07, pages 238–247. C. Dyer, S. Muresan, and P. Resnik. 2008. Generalizing Word Lattice Translation. In Proceeding ACLHLT’08, pages 1012–1020. D. Feng, Y. Lv, and M. Zhou. 2004. A New Approach for English-Chinese Named Entity Alignment. In Proceeding PACLIC’04, pages 372–379. R. Florian, H. Jing, N. Kambhatla, and I. Zitouni. 2006. Factorizing Complex Models: A Case Study in Mention Detection. In Proceeding COLINGACL’06, pages 473–480. P. Fung and L. Y. Yee. 1998. An IR Approach for Translating New Words from Nonparallel and Comparable Texts. In Proceeding COLING-ACL’98, pages 414–420. D. Hakkani-Tur, H. Ji, and R. Grishman. 2007. Using Information Extraction to Improve Cross-lingual Document Retrieval. In Proceeding RANLP Workshop on Multi-source, Multilingual Information Extraction and Summarization, pages 17–23. A. Hassan, H. Fahmy, and H. Hassan. 2007. Improving Named Entity Translation by Exploiting Comparable and Parallel Corpora. In Proceeding RANLP’07, pages 1–6. U. Hermjakob, K. Knight, and H. Daume III. 2008. Name Translation in Statistical Machine Translation: Learning When to Transliterate. In Proceeding ACL’08, pages 389–397. F. Huang, S. Vogel, and A. Waibel. 2004. Improving Named Entity Translation Combining Phonetic and Semantic Similarities. In Proceeding HLT/NAACL’04, pages 281–288. H. Ji and R. Grishman. 2006. Analysis and Repair of Name Tagger Errors. In Proceeding COLINGACL’06, pages 420–427. H. Ji, R. Grishman, D. Freitag, M. Blume, J. Wang, S. Khadivi, R. Zens, and H. Ney. 2009. Name Extraction and Translation for Distillation. 
Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation. H. Ji. 2009. Mining Name Translations from Comparable Corpora by Creating Bilingual Information Networks. In Proceeding ACL-IJCNLP’09 workshop on Building and Using Comparable Corpora, pages 34–37. B. Jones, J. Andreas, D. Bauer, K. M. Hermann, and K. Knight. 2012. Semantics-Based Machine Translation with Hyperedge Replacement Grammars. In Proceeding COLING’12, pages 1359–1376. K. Knight and J. Graehl. 1998. Machine Transliteration. In Computational Linguistics, volume 24, pages 599–612, Cambridge, MA, USA, December. MIT Press. P. Koehn, F. Josef Och, and D. Marcu. 2003. Statistical Phrase-Based Translation. In Proceeding HLTNAACL’03, pages 127–133. T. Kutsumi, T. Yoshimi, K. Kotani, and I. Sata. 2004. Integrated Use of Internal and External Evidence in The Alignment of Multi-Word Named Entities. In Proceeding PACLIC’04, pages 187–196. X. Li, S. Strassel, S. Grimes, S. Ismael, X. Ma, N. Ge, A. Bies, N. Xue, and M. Maamouri. 2010. Parallel Aligned Treebank Corpora at LDC: Methodology, Annotation and Integration. In Workshop on Annotation and Exploitation of Parallel Corpora (AEPC). Q. Li, H. Li, H. Ji, W. Wang, J. Zheng, and F. Huang. 2012. Joint Bilingual Name Tagging for Parallel Corpora. In Proceeding CIKM’12, pages 1727– 1731. D. Liu and D. Gildea. 2009. Semantic Role Features for Machine Translation. In Proceeding COLING’09, pages 716–724. C. Lo, A. K. Tumuluru, and D. Wu. 2012. Fully Automatic Semantic MT Evaluation. In Proceeding of the Seventh Workshop on Statistical Machine Translation, pages 243–252. M. Lu and J. Zhao. 2006. Multi-feature based Chinese-English Named Entity Extraction from Comparable Corpora. In Proceeding PACLIC’06, pages 134–141. W. Ma and K. McKeown. 2009. Where’s the Verb Correcting Machine Translation During Question Answering. In Proceeding ACL-IJCNLP’09, pages 333–336. P. McNamee, J. Mayfield, D. Lawrie, D. W. Oard, and D. Doermann. 2011. Cross-Language Entity Linking. In Proceeding IJCNLP’11. A. Meyer, M. Kosaka, S. Liao, and N. Xue. 2011. Improving MT Word Alignment Using Aligned MultiStage Parses. In Proceeding ACL-HLT 2011 Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 88–97. T. T. Nguyen, A. Moschitti, and G. Riccardi. 2010. Kernel-based Reranking for Named-Entity Extraction. In Proceeding COLING’10, pages 901–909. F. J. Och and H. Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19–51. 613 F. J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceeding ACL’03, pages 160–167. K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceeding ACL’02, pages 311–318. K. Parton and K. McKeown. 2010. MT Error Detection for Cross-Lingual Question Answering. Proceeding COLING’10, pages 946–954. K. Parton, K. R. McKeown, R. Coyne, M. T. Diab, R. Grishman, D. Hakkani-Tur, M. Harper, H. Ji, W. Y. Ma, A. Meyers, S. Stolbach, A. Sun, G. Tur, W. Xu, and S. Yaman. 2009. Who, What, When, Where, Why? Comparing Multiple Approaches to the Cross-Lingual 5W Task. In Proceeding ACLIJCNLP’09, pages 423–431. K. Parton, N. Habash, K. McKeown, G. Iglesias, and A. de Gispert. 2012. Can Automatic PostEditing Make MT More Meaningful? In Proceeding EAMT’12, pages 111–118. R. Rapp. 1999. Automatic Identification of Word Translations from Unrelated English and German Corpora. 
In Proceeding ACL’99, pages 519–526. L. Shao and H. T. Ng. 2004. Mining New Word Translations from Comparable Corpora. In Proceeding COLING’04. M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceeding of Association for Machine Translation in the Americas, pages 223–231. M. Snover, X. Li, W. Lin, Z. Chen, S. Tamang, M. Ge, A. Lee, Q. Li, H. Li, S. Anzaroot, and H. Ji. 2011. Cross-lingual Slot Filling from Comparable Corpora. In Proceeding ACL’11 Worshop on Building and Using Comparable Corpora, pages 110–119. D. Talbot and T. Brants. 2008. Randomized Language Models via Perfect Hash Functions. In Proceeding of ACL/HLT’08, pages 505–513. R. Udupa, K. Saravanan, A. Kumaran, and J. Jagarlamudi. 2009. MINT: A Method for Effective and Scalable Mining of Named Entity Transliterations from Large Comparable Corpora. In Proceeding EACL’09, pages 799–807. A. Venkataraman, A. Stolcke, W. Wang, D. Vergyri, V. R. R. Gadde, and J. Zheng. 2004. An Efficient Repair Procedure For Quick Transcriptions. In Proceeding INTERSPEECH’04, pages 1961–1964. D. Wu and P. Fung. 2009. Semantic Roles for SMT: A Hybrid Two-Pass Model. In NAACL HLT’09, pages 13–16. R. Zens, O. Bender, S. Hasan, S. Khadivi, E. Matusov, J. Xu, Y. Zhang, and H. Ney. 2005. The RWTH Phrase-based Statistical Machine Translation System. In Proceeding IWSLT’05, pages 155–162. J. Zheng, N. F. Ayan, W. Wang, and D. Burkett. 2009. Using Syntax in Large-Scale Audio Document Translation. In Proceeding Interspeech’09, pages 440–443. J. Zheng. 2008. SRInterp: SRI’s Scalable Multipurpose SMT Engine. In Technical Report. I. Zitouni and R. Florian. 2008. Mention Detection Crossing the Language Barrier. In Proceeding EMNLP’08, pages 600–609. 614
2013
59
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 53–63, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Grounded Language Learning from Video Described with Sentences Haonan Yu and Jeffrey Mark Siskind Purdue University School of Electrical and Computer Engineering 465 Northwestern Ave. West Lafayette, IN 47907-2035 USA [email protected], [email protected] Abstract We present a method that learns representations for word meanings from short video clips paired with sentences. Unlike prior work on learning language from symbolic input, our input consists of video of people interacting with multiple complex objects in outdoor environments. Unlike prior computer-vision approaches that learn from videos with verb labels or images with noun labels, our labels are sentences containing nouns, verbs, prepositions, adjectives, and adverbs. The correspondence between words and concepts in the video is learned in an unsupervised fashion, even when the video depicts simultaneous events described by multiple sentences or when different aspects of a single event are described with multiple sentences. The learned word meanings can be subsequently used to automatically generate description of new video. 1 Introduction People learn language through exposure to a rich perceptual context. Language is grounded by mapping words, phrases, and sentences to meaning representations referring to the world. Siskind (1996) has shown that even with referential uncertainty and noise, a system based on crosssituational learning can robustly acquire a lexicon, mapping words to word-level meanings from sentences paired with sentence-level meanings. However, it did so only for symbolic representations of word- and sentence-level meanings that were not perceptually grounded. An ideal system would not require detailed word-level labelings to acquire word meanings from video but rather could learn language in a largely unsupervised fashion, just as a child does, from video paired with sentences. There has been recent research on grounded language learning. Roy (2002) pairs training sentences with vectors of real-valued features extracted from synthesized images which depict 2D blocks-world scenes, to learn a specific set of features for adjectives, nouns, and adjuncts. Yu and Ballard (2004) paired training images containing multiple objects with spoken name candidates for the objects to find the correspondence between lexical items and visual features. Dominey and Boucher (2005) paired narrated sentences with symbolic representations of their meanings, automatically extracted from video, to learn object names, spatial-relation terms, and event names as a mapping from the grammatical structure of a sentence to the semantic structure of the associated meaning representation. Chen and Mooney (2008) learned the language of sportscasting by determining the mapping between game commentaries and the meaning representations output by a rulebased simulation of the game. Kwiatkowski et al. (2012) present an approach that learns Montaguegrammar representations of word meanings together with a combinatory categorial grammar (CCG) from child-directed sentences paired with first-order formulas that represent their meaning. 
Although most of these methods succeed in learning word meanings from sentential descriptions they do so only for symbolic or simple visual input (often synthesized); they fail to bridge the gap between language and computer vision, i.e., they do not attempt to extract meaning representations from complex visual scenes. On the other hand, there has been research on training object and event models from large corpora of complex images and video in the computer-vision community (Kuznetsova et al., 2012; Sadanand and Corso, 2012; Kulkarni et al., 2011; Ordonez et al., 2011; Yao et al., 2010). However, most such work requires training data that labels individual concepts with individual words (i.e., ob53 jects delineated via bounding boxes in images as nouns and events that occur in short video clips as verbs). There is no attempt to model phrasal or sentential meaning, let alone acquire the object or event models from training data labeled with phrasal or sentential annotations. Moreover, such work uses distinct representations for different parts of speech; i.e., object and event recognizers use different representations. In this paper, we present a method that learns representations for word meanings from short video clips paired with sentences. Our work differs from prior work in three ways. First, our input consists of realistic video filmed in an outdoor environment. Second, we learn the entire lexicon, including nouns, verbs, prepositions, adjectives, and adverbs, simultaneously from video described with whole sentences. Third we adopt a uniform representation for the meanings of words in all parts of speech, namely Hidden Markov Models (HMMs) whose states and distributions allow for multiple possible interpretations of a word or a sentence in an ambiguous perceptual context. We employ the following representation to ground the meanings of words, phrases, and sentences in video clips. We first run an object detector on each video frame to yield a set of detections, each a subregion of the frame. In principle, the object detector need just detect the objects rather than classify them. In practice, we employ a collection of class-, shape-, pose-, and viewpoint-specific detectors and pool the detections to account for objects whose shape, pose, and viewpoint may vary over time. Our methods can learn to associate a single noun with detections produced by multiple detectors. We then string together detections from individual frames to yield tracks for objects that temporally span the video clip. We associate a feature vector with each frame (detection) of each such track. This feature vector can encode image features (including the identity of the particular detector that produced that detection) that correlate with object class; region color, shape, and size features that correlate with object properties; and motion features, such as linear and angular object position, velocity, and acceleration, that correlate with event properties. We also compute features between pairs of tracks to encode the relative position and motion of the pairs of objects that participate in events that involve two participants. In principle, we can also compute features between tuples of any number of tracks. Following Yamoto et al. (1992), Siskind and Morris (1996), and Starner et al. (1998), we represent the meaning of an intransitive verb, like jump, as a two-state HMM over the velocity-direction feature, modeling the requirement that the participant move upward then downward. 
We represent the meaning of a transitive verb, like pick up, as a two-state HMM over both single-object and object-pair features: the agent moving toward the patient while the patient is as rest, followed by the agent moving together with the patient. We extend this general approach to other parts of speech. Nouns, like person, can be represented as one-state HMMs over image features that correlate with the object classes denoted by those nouns. Adjectives, like red, round, and big, can be represented as one-state HMMs over region color, shape, and size features that correlate with object properties denoted by such adjectives. Adverbs, like quickly, can be represented as one-state HMMs over object-velocity features. Intransitive prepositions, like leftward, can be represented as one-state HMMs over velocity-direction features. Static transitive prepositions, like to the left of, can be represented as one-state HMMs over the relative position of a pair of objects. Dynamic transitive prepositions, like towards, can be represented as HMMs over the changing distance between a pair of objects. Note that with this formulation, the representation of a verb, like approach, might be the same as a dynamic transitive preposition, like towards. While it might seem like overkill to represent the meanings of words as one-stateHMMs, in practice, we often instead encode such concepts with multiple states to allow for temporal variation in the associated features due to changing pose and viewpoint as well as deal with noise and occlusion. Moreover, the general framework of modeling word meanings as temporally variant time series via multi-state HMMs allows one to model denominalized verbs, i.e., nouns that denote events, as in The jump was fast. Our HMMs are parameterized with varying arity. Some, like jump(α), person(α), red(α), round(α), big(α), quickly(α), and leftward(α) have one argument, while others, like pick-up(α, β), to-the-left-of(α, β), and towards(α, β), have two arguments (In principle, any arity can be supported.). HMMs are instantiated by mapping their arguments to tracks. This 54 involves computing the associated feature vector for that HMM over the detections in the tracks chosen to fill its arguments. This is done with a two-step process to support compositional semantics. The meaning of a multi-word phrase or sentence is represented as a joint likelihood of the HMMs for the words in that phrase or sentence. Compositionality is handled by linking or coindexing the arguments of the conjoined HMMs. Thus a sentence like The person to the left of the backpack approached the trashcan would be represented as a conjunction of person(p0), to-the-left-of(p0, p1), backback(p1), approached(p0, p2), and trash-can(p2) over the three participants p0, p1, and p2. This whole sentence is then grounded in a particular video by mapping these participants to particular tracks and instantiating the associated HMMs over those tracks, by computing the feature vectors for each HMM from the tracks chosen to fill its arguments. Our algorithm makes six assumptions. 
First, we assume that we know the part of speech Cm associated with each lexical entry m, along with the part-of-speech dependent number of states Ic in the HMMs used to represent word meanings in that part of speech, the part-of-speech dependent number of features Nc in the feature vectors used by HMMs to represent word meanings in that part of speech, and the part-of-speech dependent feature-vector computation Φc used to compute the features used by HMMs to represent word meanings in that part of speech. Second, we pair individual sentences each with a short video clip that depicts that sentence. The algorithm is not able to determine the alignment between multiple sentences and longer video segments. Note that there is no requirement that the video depict only that sentence. Other objects may be present and other events may occur. In fact, nothing precludes a training corpus with multiple copies of the same video, each paired with a different sentence describing a different aspect of that video. Moreover, our algorithm potentially can handle a small amount of noise, where a video clip is paired with an incorrect sentence that the video does not depict. Third, we assume that we already have (pre-trained) low-level object detectors capable of detecting instances of our target event participants in individual frames of the video. We allow such detections to be unreliable; our method can handle a moderate amount of false positives and false negatives. We do not need to know the mapping from these object-detection classes to words; our algorithm determines that. Fourth, we assume that we know the arity of each word in the corpus, i.e., the number of arguments that that word takes. For example, we assume that we know that the word person(α) takes one argument and the word approached(α, β) takes two arguments. Fifth, we assume that we know the total number of distinct participants that collectively fill all of the arguments for all of the words in each training sentence. For example, for the sentence The person to the left of the backpack approached the trash-can, we assume that we know that there are three distinct objects that participate in the event denoted. Sixth, we assume that we know the argument-to-participant mapping for each training sentence. Thus, for example, for the above sentence we would know person(p0), to-the-left-of(p0, p1), backback(p1), approached(p0, p2), and trash-can(p2). The latter two items can be determined by parsing the sentence, which is what we do. One can imagine learning the ability to automatically perform the latter two items, and even the fourth item above, by learning the grammar and the part of speech of each word, such as done by Kwiatkowski et al. (2012). We leave such for future work. Figure 1 illustrates a single frame from a potential training sample provided as input to our learner. It consists of a video clip paired with a sentence, where the arguments of the words in the sentence are mapped to participants. From a sequence of such training samples, our learner determines the objects tracks and the mapping from participants to those tracks, together with the meanings of the words. The remainder of the paper is organized as follows. Section 2 generally describes our problem of lexical acquisition from video. Section 3 introduces our work on the sentence tracker, a method for jointly tracking the motion of multiple objects in a video that participate in a sententiallyspecified event. 
Section 4 elaborates on the details of our problem formulation in the context of this sentence tracker. Section 5 describes how to generalize and extend the sentence tracker so that it can be used to support lexical acquisition. We demonstrate this lexical acquisition algorithm on a small example in Section 6. Finally, we conclude with a discussion in Section 7.

[Figure 1: An illustration of our problem, for the sentence "The person to the left of the backpack carried the trash-can towards the chair." Each word in the sentence has one or more arguments (α and possibly β), each argument of each word is assigned to a participant (p0, . . . , p3) in the event described by the sentence, and each participant can be assigned to any object track in the video. This figure shows a possible (but erroneous) interpretation of the sentence where the mapping is: p0 ↦ Track 3, p1 ↦ Track 0, p2 ↦ Track 1, and p3 ↦ Track 2, which might (incorrectly) lead the learner to conclude that the word person maps to the backpack, the word backpack maps to the chair, the word trash-can maps to the trash-can, and the word chair maps to the person.]

2 General Problem Formulation

Throughout this paper, lowercase letters are used for variables or hidden quantities while uppercase ones are used for constants or observed quantities. We are given a lexicon {1, . . . , M}, letting m denote a lexical entry. We are given a sequence D = (D1, . . . , DR) of video clips Dr, each paired with a sentence Sr from a sequence S = (S1, . . . , SR) of sentences. We refer to Dr paired with Sr as a training sample. Each sentence Sr is a sequence (Sr,1, . . . , Sr,Lr) of words Sr,l, each an entry from the lexicon. A given entry may potentially appear in multiple sentences and even multiple times in a given sentence. For example, the third word in the first sentence might be the same entry as the second word in the fourth sentence, in which case S1,3 = S4,2. This is what allows cross-situational learning in our algorithm. Let us assume, for a moment, that we can process each video clip Dr to yield a sequence (τr,1, . . . , τr,Ur) of object tracks τr,u. Let us also assume that Dr is paired with a sentence Sr = The person approached the chair, specified to have two participants, pr,0 and pr,1, with the mapping person(pr,0), chair(pr,1), and approached(pr,0, pr,1). Let us further assume, for a moment, that we are given a mapping from participants to object tracks, say pr,0 ↦ τr,39 and pr,1 ↦ τr,51. This would allow us to instantiate the HMMs with object tracks for a given video clip: person(τr,39), chair(τr,51), and approached(τr,39, τr,51). Let us further assume that we can score each such instantiated HMM and aggregate the scores for all of the words in a sentence to yield a sentence score and further aggregate the scores for all of the sentences in the corpus to yield a corpus score. However, we don't know the parameters of the HMMs. These constitute the unknown meanings of the words in our corpus which we wish to learn. The problem is to simultaneously determine (a) those parameters along with (b) the object tracks and (c) the mapping from participants to object tracks. We do this by finding (a)–(c) that maximizes the corpus score.

3 The Sentence Tracker

Barbu et al.
(2012a) presented a method that first determines object tracks from a single video clip and then uses these fixed tracks with HMMs to recognize actions corresponding to verbs and construct sentential descriptions with templates. Later Barbu et al. (2012b) addressed the problem of solving (b) and (c), for a single object track constrained by a single intransitive verb, without solving (a), in the context of a single video clip. Our group has generalized this work to yield an algorithm called the sentence tracker which operates by way of a factorial HMM framework. We introduce that here as the foundation of our extension.

Each video clip Dr contains Tr frames. We run an object detector on each frame to yield a set D^t_r of detections. Since our object detector is unreliable, we bias it to have high recall but low precision, yielding multiple detections in each frame. We form an object track by selecting a single detection for that track for each frame. For a moment, let us consider a single video clip with length T, with detections D^t in frame t. Further, let us assume that we seek a single object track in that video clip. Let j^t denote the index of the detection from D^t in frame t that is selected to form the track. The object detector scores each detection. Let F(D^t, j^t) denote that score. Moreover, we wish the track to be temporally coherent; we want the objects in a track to move smoothly over time and not jump around the field of view. Let G(D^{t-1}, j^{t-1}, D^t, j^t) denote some measure of coherence between two detections in adjacent frames. (One possible such measure is consistency of the displacement of D^t relative to D^{t-1} with the velocity of D^{t-1} computed from the image by optical flow.) One can select the detections to yield a track that maximizes both the aggregate detection score and the aggregate temporal coherence score:

\max_{j^1,\ldots,j^T} \left[ \sum_{t=1}^{T} F(D^t, j^t) + \sum_{t=2}^{T} G(D^{t-1}, j^{t-1}, D^t, j^t) \right]    (1)

This can be determined with the Viterbi (1967) algorithm and is known as detection-based tracking (Viterbi, 1971). Recall that we model the meaning of an intransitive verb as an HMM over a time series of features extracted for its participant in each frame. Let λ denote the parameters of this HMM, (q^1, . . . , q^T) denote the sequence of states q^t that leads to an observed track, B(D^t, j^t, q^t, λ) denote the conditional log probability of observing the feature vector associated with the detection selected by j^t among the detections D^t in frame t, given that the HMM is in state q^t, and A(q^{t-1}, q^t, λ) denote the log transition probability of the HMM. For a given track (j^1, . . . , j^T), the state sequence that yields the maximal likelihood is given by:

\max_{q^1,\ldots,q^T} \left[ \sum_{t=1}^{T} B(D^t, j^t, q^t, \lambda) + \sum_{t=2}^{T} A(q^{t-1}, q^t, \lambda) \right]    (2)

which can also be found by the Viterbi algorithm. A given video clip may depict multiple objects, each moving along its own trajectory. There may be both a person jumping and a ball rolling. How are we to select one track over the other? The key insight of the sentence tracker is to bias the selection of a track so that it matches an HMM. This is done by combining the cost function of Eq. 1 with the cost function of Eq. 2 to yield Eq. 3, which can also be determined using the Viterbi algorithm. This is done by forming the cross product of the two lattices. This jointly selects the optimal detections to form the track, together with the optimal state sequence, and scores that combination:

\max_{\substack{j^1,\ldots,j^T \\ q^1,\ldots,q^T}} \left[ \sum_{t=1}^{T} \bigl( F(D^t, j^t) + B(D^t, j^t, q^t, \lambda) \bigr) + \sum_{t=2}^{T} \bigl( G(D^{t-1}, j^{t-1}, D^t, j^t) + A(q^{t-1}, q^t, \lambda) \bigr) \right]    (3)
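The following is a minimal sketch of the cross-product Viterbi maximization in Eq. 3 for a single track and a single word, assuming the score functions F, G, B, and A are supplied as plain Python callables (with λ folded into B and A); this is an illustration of the dynamic program, not the authors' implementation.

```python
import itertools

def sentence_track_single(detections, num_states, F, G, B, A):
    """detections[t] is the list D^t of detections in frame t.
    Jointly choose a detection index j^t and an HMM state q^t per frame,
    maximizing the combined score of Eq. 3."""
    T = len(detections)
    # chart[(j, q)] = best score of any prefix ending with detection j, state q
    chart = {(j, q): F(detections[0], j) + B(detections[0], j, q)
             for j in range(len(detections[0])) for q in range(num_states)}
    back = {}
    for t in range(1, T):
        new_chart = {}
        for j, q in itertools.product(range(len(detections[t])), range(num_states)):
            local = F(detections[t], j) + B(detections[t], j, q)
            best_prev, best = None, float("-inf")
            for (jp, qp), s in chart.items():
                cand = s + G(detections[t - 1], jp, detections[t], j) + A(qp, q) + local
                if cand > best:
                    best, best_prev = cand, (jp, qp)
            new_chart[(j, q)] = best
            back[(t, j, q)] = best_prev
        chart = new_chart
    # Recover the best final cell and backtrace the joint (track, state) path.
    (j, q), score = max(chart.items(), key=lambda kv: kv[1])
    path = [(j, q)]
    for t in range(T - 1, 0, -1):
        j, q = back[(t, j, q)]
        path.append((j, q))
    return score, list(reversed(path))
```

The same recurrence extends to multiple tracks and multiple words by enlarging the per-frame state to a tuple of detection indices and a tuple of HMM states, as described next.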
While we formulated the above around a single track and a word that contains a single participant, it is straightforward to extend this so that it supports multiple tracks and words of higher arity by forming a larger cross product. When doing so, we generalize j^t to denote a sequence of detections from D^t, one for each of the tracks. We further need to generalize F so that it computes the joint score of a sequence of detections, one for each track, G so that it computes the joint measure of coherence between a sequence of pairs of detections in two adjacent frames, and B so that it computes the joint conditional log probability of observing the feature vectors associated with the sequence of detections selected by j^t. When doing this, note that Eqs. 1 and 3 maximize over j^1, . . . , j^T, which denotes T sequences of detection indices, rather than T individual indices. It is further straightforward to extend the above to support a sequence (S1, . . . , SL) of words Sl denoting a sentence, each of which applies to different subsets of the multiple tracks, again by forming a larger cross product. When doing so, we generalize q^t to denote a sequence (q^t_1, . . . , q^t_L) of states q^t_l, one for each word l in the sentence, and use q_l to denote the sequence (q^1_l, . . . , q^T_l) and q to denote the sequence (q_1, . . . , q_L). We further need to generalize B so that it computes the joint conditional log probability of observing the feature vectors for the detections in the tracks that are assigned to the arguments of the HMM for each word in the sentence, and A so that it computes the joint log transition probability for the HMMs of all words in the sentence. This allows selection of an optimal sequence of tracks that yields the highest score for the sentential meaning of a sequence of words. Modeling the meaning of a sentence through a sequence of words whose meanings are modeled by HMMs defines a factorial HMM for that sentence, since the overall Markov process for that sentence can be factored into independent component processes (Brand et al., 1997; Zhong and Ghosh, 2001) for the individual words. In this view, q denotes the state sequence for the combined factorial HMM and q_l denotes the factor of that state sequence for word l. The remainder of this paper wraps this sentence tracker in Baum-Welch (Baum et al., 1970; Baum, 1972).

4 Detailed Problem Formulation

We adapt the sentence tracker to training on a corpus of R video clips, each paired with a sentence. Thus we augment our notation, generalizing j^t to j^t_r and q^t_l to q^t_{r,l}. Below, we use j_r to denote (j^1_r, . . . , j^{T_r}_r), j to denote (j_1, . . . , j_R), q_{r,l} to denote (q^1_{r,l}, . . . , q^{T_r}_{r,l}), q_r to denote (q_{r,1}, . . . , q_{r,L_r}), and q to denote (q_1, . . . , q_R). We use discrete features, namely natural numbers, in our feature vectors, quantized by a binning process. We assume the part of speech of entry m is known as C_m. The length of the feature vector may vary across parts of speech. Let N_c denote the length of the feature vector for part of speech c, x_{r,l} denote the time series (x^1_{r,l}, . . . , x^{T_r}_{r,l}) of feature vectors x^t_{r,l} associated with S_{r,l} (which, recall, is some entry m), and x_r denote the sequence (x_{r,1}, . . . , x_{r,L_r}). We assume that we are given a function Φ_c(D^t_r, j^t_r) that computes the feature vector x^t_{r,l} for the word S_{r,l} whose part of speech is C_{S_{r,l}} = c.
Note that we allow Φ to be dependent on c, allowing different features to be computed for different parts of speech, since we can determine m, and thus C_m, from S_{r,l}. We choose to have N_c and Φ_c depend on the part of speech c and not on the entry m, since doing so would be tantamount to encoding the to-be-learned word meaning in the provided feature-vector computation. The goal of training is to find a sequence λ = (λ_1, . . . , λ_M) of parameters λ_m that best explains the R training samples. The parameters λ_m constitute the meaning of the entry m in the lexicon. Collectively, these are the initial state probabilities a^m_{0,k}, for 1 ≤ k ≤ I_{C_m}, the transition probabilities a^m_{i,k}, for 1 ≤ i, k ≤ I_{C_m}, and the output probabilities b^m_{i,n}(x), for 1 ≤ i ≤ I_{C_m} and 1 ≤ n ≤ N_{C_m}, where I_{C_m} denotes the number of states in the HMM for entry m. Like before, we could have a distinct I_m for each entry m but instead have I_{C_m} depend only on the part of speech of entry m, and assume that we know the fixed I for each part of speech. In our case, b^m_{i,n} is a discrete distribution because the features are binned.

5 The Learning Algorithm

Instantiating the above approach requires a definition for what it means to best explain the R training samples. Towards this end, we define the score of a video clip Dr paired with a sentence Sr given the parameter set λ to characterize how well this training sample is explained. While the cost function in Eq. 3 may qualify as a score, it is easier to fit a likelihood calculation into the Baum-Welch framework than a MAP estimate. Thus we replace the max in Eq. 3 with a sum and redefine our scoring function as follows:

L(D_r; S_r, \lambda) = \sum_{j_r} P(j_r | D_r) \, P(x_r | S_r, \lambda)    (4)

The score in Eq. 4 can be interpreted as an expectation of the HMM likelihood over all possible mappings from participants to all possible tracks. By definition, P(j_r | D_r) = V(D_r, j_r) / \sum_{j'_r} V(D_r, j'_r), where the numerator is the score of a particular track sequence j_r while the denominator sums the scores over all possible track sequences. The log of the numerator V(D_r, j_r) is simply Eq. 1 without the max. The log of the denominator can be computed efficiently by the forward algorithm (Baum and Petrie, 1966). The likelihood for a factorial HMM can be computed as:

P(x_r | S_r, \lambda) = \sum_{q_r} \prod_{l} P(x_{r,l}, q_{r,l} | S_{r,l}, \lambda)    (5)

i.e., summing the likelihoods over all possible state sequences. Each summand is simply the joint likelihood for all the words in the sentence conditioned on a state sequence q_r. For HMMs we have:

P(x_{r,l}, q_{r,l} | S_{r,l}, \lambda) = \prod_{t} a^{S_{r,l}}_{q^{t-1}_{r,l},\, q^t_{r,l}} \prod_{n} b^{S_{r,l}}_{q^t_{r,l},\, n}(x^t_{r,l,n})    (6)

Finally, for a training corpus of R samples, we seek to maximize the joint score:

L(D; S, \lambda) = \prod_{r} L(D_r; S_r, \lambda)    (7)

A local maximum can be found by employing the Baum-Welch algorithm (Baum et al., 1970; Baum, 1972). By constructing an auxiliary function (Bilmes, 1997), one can derive the reestimation formulas in Eq. 8.
3 that incorporates both tracking (Eq. 1) and word models (Eq. 2), we need to count the frequency of transitions in the whole cross-product lattice. As an example of such cross-product occurrence counting, when counting the transitions from state \(i\) to \(k\) for the \(l\)th word from frame \(t-1\) to \(t\), i.e., \(\xi(r,l,i,k,t)\), we need to count all the possible paths through the adjacent factorial states \((j^{t-1}_r, q^{t-1}_{r,1},\dots,q^{t-1}_{r,L_r})\) and \((j^{t}_r, q^{t}_{r,1},\dots,q^{t}_{r,L_r})\) such that \(q^{t-1}_{r,l}=i\) and \(q^{t}_{r,l}=k\). Similarly, when counting the frequency of being at state \(i\) while observing \(h\) as the \(n\)th feature in frame \(t\) for the \(l\)th word of entry \(m\), i.e., \(\gamma(r,l,n,i,h,t)\), we need to count all the possible paths through the factorial state \((j^{t}_r, q^{t}_{r,1},\dots,q^{t}_{r,L_r})\) such that \(q^{t}_{r,l}=i\) and the \(n\)th feature computed by \(\Phi_{C_m}(D^t_r, j^t_r)\) is \(h\).
The reestimation of a single component HMM can depend on the previous estimate for other component HMMs. This dependence happens because of the argument-to-participant mapping, which coindexes arguments of different component HMMs to the same track. It is precisely this dependence that leads to cross-situational learning of two kinds: both inter-sentential and intra-sentential. Acquisition of a word meaning is driven across sentences by entries that appear in more than one training sample, and within sentences by the requirement that the meanings of all of the individual words in a sentence be consistent with the collective sentential meaning.
6 Experiment
We filmed 61 video clips (each 3–5 seconds at 640×480 resolution and 40 fps) that depict a variety of different compound events. Each clip depicts multiple simultaneous events between some subset of four objects: a person, a backpack, a chair, and a trash-can. These clips were filmed in three different outdoor environments, which we use for cross validation. We manually annotated each video with several sentences that describe what occurs in that video. The sentences were constrained to conform to the grammar in Table 1.
Table 1: The grammar used for our annotation and generation. Our lexicon contains 1 determiner, 4 nouns, 2 spatial-relation prepositions, 4 verbs, 2 adverbs, and 2 motion prepositions, for a total of 15 lexical entries over 6 parts of speech.
    S   → NP VP
    NP  → D N [PP]
    D   → the
    N   → person | backpack | trash-can | chair
    PP  → P NP
    P   → to the left of | to the right of
    VP  → V NP [ADV] [PPM]
    V   → picked up | put down | carried | approached
    ADV → quickly | slowly
    PPM → PM NP
    PM  → towards | away from
Our corpus of 159 training samples pairs some videos with more than one sentence and some sentences with more than one video, with an average of 2.6 sentences per video.¹ (¹Our code, videos, and sentential annotations are available at http://haonanyu.com/research/acl2013/.) We model and learn the semantics of all words except determiners. Table 2 specifies the arity, the state number \(I_c\), and the features computed by \(\Phi_c\) for the semantic models for words of each part of speech \(c\). While we specify a different subset of features for each part of speech, we presume that, in principle, with enough training data, we could include all features in all parts of speech and automatically learn which ones are noninformative and lead to uniform distributions.
We use an off-the-shelf object detector (Felzenszwalb et al., 2010a; Felzenszwalb et al., 2010b) which outputs detections in the form of scored axis-aligned rectangles. We trained four object detectors, one for each of the four object classes in our corpus: person, backpack, chair, and trash-can.
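Stepping back to the annotation grammar in Table 1, the following minimal sketch samples sentences from it. The rule set is transcribed from the table (bracketed constituents optional); the sampling procedure itself is only an illustration I add here, not part of the original annotation pipeline.

```python
import random

# Grammar of Table 1; optional constituents are expanded into alternatives.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["D", "N"], ["D", "N", "PP"]],
    "D":   [["the"]],
    "N":   [["person"], ["backpack"], ["trash-can"], ["chair"]],
    "PP":  [["P", "NP"]],
    "P":   [["to the left of"], ["to the right of"]],
    "VP":  [["V", "NP"], ["V", "NP", "ADV"], ["V", "NP", "PPM"],
            ["V", "NP", "ADV", "PPM"]],
    "V":   [["picked up"], ["put down"], ["carried"], ["approached"]],
    "ADV": [["quickly"], ["slowly"]],
    "PPM": [["PM", "NP"]],
    "PM":  [["towards"], ["away from"]],
}

def generate(symbol="S"):
    """Recursively expand a nonterminal; strings not in GRAMMAR are terminals."""
    if symbol not in GRAMMAR:
        return symbol
    expansion = random.choice(GRAMMAR[symbol])
    return " ".join(generate(s) for s in expansion)

print(generate())   # e.g. "the person carried the backpack towards the chair"
```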
Table 2: Arguments and model configurations for different parts of speech \(c\). VEL stands for velocity, MAG for magnitude, ORIENT for orientation, and DIST for distance.
    c    arity  I_c  Φ_c
    N    1      1    α detector index
    V    2      3    α VEL MAG, α VEL ORIENT, β VEL MAG, β VEL ORIENT, α-β DIST, α-β size ratio
    P    2      1    α-β x-position
    ADV  1      3    α VEL MAG
    PM   2      3    α VEL MAG, α-β DIST
For each frame, we pick the two highest-scoring detections produced by each object detector and pool the results, yielding eight detections per frame. Having a larger pool of detections per frame can better compensate for false negatives in the object detection and potentially yield smoother tracks, but it increases the size of the lattice and the concomitant running time and does not lead to appreciably better performance on our corpus.
We compute continuous features, such as velocity, distance, size ratio, and x-position, solely from the detection rectangles and quantize the features into bins as follows:
velocity: To reduce noise, we compute the velocity of a participant by averaging the optical flow in the detection rectangle. The velocity magnitude is quantized into 5 levels: absolutely stationary, stationary, moving, fast moving, and quickly. The velocity orientation is quantized into 4 directions: left, up, right, and down.
distance: We compute the Euclidean distance between the detection centers of two participants, which is quantized into 3 levels: near, normal, and far away.
size ratio: We compute the ratio of the detection area of the first participant to the detection area of the second participant, quantized into 2 possibilities: larger/smaller than.
x-position: We compute the difference between the x-coordinates of the participants, quantized into 2 possibilities: to the left/right of.
The binning process was determined by a preprocessing step that clustered a subset of the training data. We also incorporate the index of the detector that produced the detection as a feature. The particular features computed for each part of speech are given in Table 2. Note that while we use English phrases, like "to the left of", to refer to particular bins of particular features, and we have object detectors which we train on samples of a particular object class such as "backpack", such phrases are only mnemonic of the clustering and object-detector training process. We do not have a fixed correspondence between the lexical entries and any particular feature value. Moreover, that correspondence need not be one-to-one: a given lexical entry may correspond to a (time-variant) constellation of feature values and any given feature value may participate in the meaning of multiple lexical entries.
We perform a three-fold cross validation, taking the test data for each fold to be the videos filmed in a given outdoor environment and the training data for that fold to be all training samples that contain other videos. For testing, we hand-selected 24 sentences generated by the grammar in Table 1, where each sentence is true for at least one test video. Half of these sentences (designated NV) contain only nouns and verbs, while the other half (designated ALL) contain other parts of speech. The latter are longer and more complicated than the former. We score each testing video paired with every sentence in both NV and ALL. To evaluate our results, we manually annotated the correctness of each such pair. Video-sentence pairs could be scored with Eq. 4.
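Before turning to scoring, the binning just described can be sketched roughly as follows. The thresholds below are illustrative placeholders (the actual bin boundaries come from the clustering preprocessing step), and the detection boxes and averaged optical flow are assumed to be supplied by the detector and tracker.

```python
import numpy as np

def quantize_pair_features(rect_a, rect_b, flow_a):
    """Sketch of the feature binning; rects are (x1, y1, x2, y2) detection
    boxes and flow_a is the averaged optical flow (vx, vy) inside rect_a.
    Bin boundaries are placeholders, not the clustered ones from the paper."""
    ca = np.array([(rect_a[0] + rect_a[2]) / 2.0, (rect_a[1] + rect_a[3]) / 2.0])
    cb = np.array([(rect_b[0] + rect_b[2]) / 2.0, (rect_b[1] + rect_b[3]) / 2.0])
    vel_mag = np.digitize(np.hypot(*flow_a), [0.1, 0.5, 2.0, 5.0])     # 5 levels
    vel_orient = int(((np.arctan2(flow_a[1], flow_a[0]) + np.pi) // (np.pi / 2)) % 4)
    dist = np.digitize(np.linalg.norm(ca - cb), [50.0, 150.0])          # 3 levels
    area = lambda r: max(r[2] - r[0], 0) * max(r[3] - r[1], 0)
    size_ratio = int(area(rect_a) > area(rect_b))                       # 2 bins
    x_pos = int(ca[0] > cb[0])                                          # left/right
    return np.array([vel_mag, vel_orient, dist, size_ratio, x_pos])
```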
However, the score depends on the sentence length, the collective numbers of states and features in the HMMs for words in that sentence, and the length of the video clip. To render the scores comparable across such variation, we incorporate a sentence prior to the per-frame score:
\[
\hat{L}(D_r, S_r; \lambda) = \bigl[L(D_r; S_r, \lambda)\bigr]^{\frac{1}{T_r}}\, \pi(S_r) \tag{9}
\]
where
\[
\pi(S_r) = \exp \sum_{l=1}^{L_r} \Biggl( E\bigl(I_{C_{S_{r,l}}}\bigr) + \sum_{n=1}^{N_{C_{S_{r,l}}}} E\bigl(Z_{C_{S_{r,l}},n}\bigr) \Biggr) \tag{10}
\]
In the above, \(Z_{C_{S_{r,l}},n}\) is the number of bins for the \(n\)th feature of \(S_{r,l}\) of part of speech \(C_{S_{r,l}}\), and \(E(Y) = -\sum_{y=1}^{Y} \frac{1}{Y}\log\frac{1}{Y} = \log Y\) is the entropy of a uniform distribution over \(Y\) bins. This prior prefers longer sentences which describe more information in the video.
Table 3: F1 scores of different methods.
         CHANCE   BLIND   OUR    HAND
    NV   0.155    0.265   0.545  0.748
    ALL  0.099    0.198   0.639  0.786
Figure 2: ROC curves of trained models and hand-written models.
The scores are thresholded to decide hits, which, together with the manual annotations, can generate TP, TN, FP, and FN counts. We select the threshold that leads to the maximal F1 score on the training set, use this threshold to compute F1 scores on the test set in each fold, and average F1 scores across the folds. The F1 scores are listed in the column labeled Our in Table 3. For comparison, we also report F1 scores for three baselines: Chance, Blind, and Hand. The Chance baseline randomly classifies a video-sentence pair as a hit with probability 0.5. The Blind baseline determines hits by potentially looking at the sentence but never looking at the video. We can find an upper bound on the F1 score that any blind method could have on each of our test sets by solving a 0-1 fractional programming problem (Dinkelbach, 1967) (see Appendix A for details). The Hand baseline determines hits with hand-coded HMMs, carefully designed to yield what we believe is near-optimal performance.
As can be seen from Table 3, our trained models perform substantially better than the Chance and Blind baselines and approach the performance of the ideal Hand baseline. One can further see from the ROC curves in Figure 2, comparing the trained and hand-written models on both NV and ALL, that the trained models are close to optimal. Note that performance on ALL exceeds that on NV with the trained models. This is because longer sentences with varied parts of speech incorporate more information into the scoring process.
7 Conclusion
We presented a method that learns word meanings from video paired with sentences. Unlike prior work, our method deals with realistic video scenes labeled with whole sentences, not individual words labeling hand-delineated objects or events. The experiment shows that it can correctly learn the meaning representations in terms of HMM parameters for our lexical entries, from highly ambiguous training data. Our maximum-likelihood method makes use of only positive sentential labels. As such, it might require more training data for convergence than a method that also makes use of negative training sentences that are not true of a given video. Such can be handled with discriminative training, a topic we plan to address in the future. We believe that this will allow learning larger lexicons from more complex video without excessive amounts of training data.
Acknowledgments
This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-10-20060.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either express or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes, notwithstanding any copyright notation herein. A An Upper Bound on the F1 Score of any Blind Method A Blind algorithm makes identical decisions on the same sentence paired with different video clips. An optimal algorithm will try to find a decision si for each test sentence i that maximizes the F1 score. Suppose, the ground-truth yields FPi false positives and TPi true positives on the test set when si = 1. Also suppose that setting si = 0 yields FNi false negatives. Then the F1 score is F1 = 1 1 + P i siFPi + (1 −si)FNi P i 2siTPi | {z } ∆ Thus we want to minimize the term ∆. This is an instance of a 0-1 fractional programming problem which can be solved by binary search or Dinkelbach’s algorithm (Dinkelbach, 1967). 61 References A. Barbu, A. Bridge, Z. Burchill, D. Coroian, S. Dickinson, S. Fidler, A. Michaux, S. Mussman, N. Siddharth, D. Salvi, L. Schmidt, J. Shangguan, J. M. Siskind, J. Waggoner, S. Wang, J. Wei, Y. Yin, and Z. Zhang. 2012a. Video in sentences out. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, pages 102–112. A. Barbu, N. Siddharth, A. Michaux, and J. M. Siskind. 2012b. Simultaneous object detection, tracking, and event recognition. Advances in Cognitive Systems, 2:203–220, December. L. E. Baum and T. Petrie. 1966. Statistical inference for probabilistic functions of finite state Markov chains. The Annals of Mathematical Statistics, 37:1554–1563. L. E. Baum, T. Petrie, G. Soules, and N. Weiss. 1970. A maximization technique occuring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 41(1):164– 171. L. E. Baum. 1972. An inequality and associated maximization technique in statistical estimation of probabilistic functions of a Markov process. Inequalities, 3:1–8. J. Bilmes. 1997. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. Technical Report TR-97-021, ICSI. M. Brand, N. Oliver, and A. Pentland. 1997. Coupled hidden Markov models for complex action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 994–999. D. L. Chen and R. J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In Proceedings of the 25th International Conference on Machine Learning. W. Dinkelbach. 1967. On nonlinear fractional programming. Management Science, 13(7):492–498. P. F. Dominey and J.-D. Boucher. 2005. Learning to talk about events from narrated video in a construction grammar framework. Artificial Intelligence, 167(12):31–61. P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. 2010a. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627–1645. P. F. Felzenszwalb, R. B. Girshick, and D. A. McAllester. 2010b. Cascade object detection with deformable part models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2241–2248. G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. 2011. Baby talk: Understanding and generating simple image descriptions. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1601–1608. P. Kuznetsova, V. Ordonez, A. C. Berg, T. L. Berg, and Y. Choi. 2012. Collective generation of natural image descriptions. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, pages 359–368. T. Kwiatkowski, S. Goldwater, L. Zettlemoyer, and M. Steedman. 2012. A probabilistic model of syntactic and semantic acquisition from child-directed utterances and their meanings. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 234– 244. V. Ordonez, G. Kulkarni, and T. L. Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In Proceedings of Neural Information Processing Systems. D. Roy. 2002. Learning visually-grounded words and syntax for a scene description task. Computer Speech and Language, 16:353–385. S. Sadanand and J. J. Corso. 2012. Action bank: A high-level representation of activity in video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1234–1241. J. M. Siskind and Q. Morris. 1996. A maximumlikelihood approach to visual event classification. In Proceedings of the Fourth European Conference on Computer Vision, pages 347–360. J. M. Siskind. 1996. A computational study of crosssituational techniques for learning word-to-meaning mappings. Cognition, 61:39–91. T. Starner, J. Weaver, and A. Pentland. 1998. Realtime American Sign Language recognition using desk and wearable computer based video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1371–1375. A. J. Viterbi. 1967. Error bounds for convolutional codes and an asymtotically optimum decoding algorithm. IEEE Transactions on Information Theory, 13:260–267. A. Viterbi. 1971. Convolutional codes and their performance in communication systems. IEEE Transactions on Communication Technology, 19(5):751– 772. J. Yamoto, J. Ohya, and K. Ishii. 1992. Recognizing human action in time-sequential images using hidden Markov model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 379–385. 62 B. Z. Yao, X. Yang, L. Lin, M. W. Lee, and S.-C. Zhu. 2010. I2T: Image parsing to text description. Proceedings of the IEEE, 98(8):1485–1508, August. C. Yu and D. H. Ballard. 2004. On the integration of grounding language and learning objects. In Proceedings of the 19th National Conference on Artifical intelligence, pages 488–493. S. Zhong and J. Ghosh. 2001. A new formulation of coupled hidden Markov models. Technical report, Department of Electrical and Computer Engineering, The University of Texas at Austin. 63
2013
6
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 615–621, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Decipherment Complexity in 1:1 Substitution Ciphers Malte Nuhn and Hermann Ney Human Language Technology and Pattern Recognition Computer Science Department, RWTH Aachen University, Aachen, Germany <surname>@cs.rwth-aachen.de Abstract In this paper we show that even for the case of 1:1 substitution ciphers—which encipher plaintext symbols by exchanging them with a unique substitute—finding the optimal decipherment with respect to a bigram language model is NP-hard. We show that in this case the decipherment problem is equivalent to the quadratic assignment problem (QAP). To the best of our knowledge, this connection between the QAP and the decipherment problem has not been known in the literature before. 1 Introduction The decipherment approach for MT has recently gained popularity for training and adapting translation models using only monolingual data. The general idea is to find those translation model parameters that maximize the probability of the translations of a given source text in a given language model of the target language. In general, the process of translation has a wide range of phenomena like substitution and reordering of words and phrases. In this paper we only study models that substitute tokens—i.e. words or letters—with a unique substitute. It therefore serves as a very basic case for decipherment and machine translation. Multiple techniques like integer linear programming (ILP), A∗search, genetic algorithms, and Bayesian inference have been used to tackle the decipherment problem for 1:1 substitution ciphers. The existence of such a variety of different approaches for solving the same problem already shows that there is no obvious way to solve the problem optimally. In this paper we show that decipherment of 1:1 substitution ciphers is indeed NP-hard and thus explain why there is no single best approach to the problem. The literature on decipherment provides surprisingly little on the analysis of the complexity of the decipherment problem. This might be related to the fact that a statistical formulation of the decipherment problem has not been analyzed with respect to n-gram language models: This paper shows the close relationship of the decipherment problem to the quadratic assignment problem. To the best of our knowledge the connection between the decipherment problem and the quadratic assignment problem was not known. The remainder of this paper is structured as follows: In Section 2 we review related work. Section 3 introduces the decipherment problem and describes the notation and definitions used throughout this paper. In Section 4 we show that decipherment using a unigram language model corresponds to solving a linear sum assignment problem (LSAP). Section 5 shows the connection between the quadratic assignment problem and decipherment using a bigram language model. Here we also give a reduction of the traveling salesman problem (TSP) to the decipherment problem to highlight the additional complexity in the decipherment problem. 2 Related Work In recent years a large number of publications on the automatic decipherment of substitution ciphers has been published. 
These publications were mostly dominated by rather heuristic methods and did not provide a theoretical analysis of the complexity of the decipherment problem: (Knight and Yamada, 1999) and (Knight et al., 2006) use the EM algorithm for various decipherment problems, like e.g. word substitution ciphers. (Ravi and Knight, 2008) and (Corlett and Penn, 2010) are able to obtain optimal (i.e. without search errors) decipherments of short cryptograms given an n-gram language model. (Ravi and Knight, 2011), (Nuhn et al., 2012), and (Dou and Knight, 2012) treat natural language translation as a deciphering problem including phenomena like reordering, insertion, and deletion, and are able to train translation models using only monolingual data.
In this paper we will show the connection between the decipherment problem and the linear sum assignment problem as well as the quadratic assignment problem: Regarding the linear sum assignment problem we will make use of definitions presented in (Burkard and Çela, 1999). Concerning the quadratic assignment problem we will use basic definitions from (Beckmann and Koopmans, 1957). Further, (Burkard et al., 1998) gives a good overview over the quadratic assignment problem, including different formulations, solution methods, and an analysis of computational complexity. The paper also references a vast amount of further literature that might be interesting for future research.
3 Definitions
In the following we will use the machine translation notation and denote the ciphertext with \(f_1^N = f_1 \dots f_j \dots f_N\), which consists of cipher tokens \(f_j \in V_f\). We denote the plaintext with \(e_1^N = e_1 \dots e_i \dots e_N\) (and its vocabulary \(V_e\) respectively). We define
\[
e_0 = f_0 = e_{N+1} = f_{N+1} = \$ \tag{1}
\]
with "\$" being a special sentence boundary token. We use the abbreviations \(\overline{V}_e = V_e \cup \{\$\}\) and \(\overline{V}_f\) respectively.
A general substitution cipher uses a table \(s(e|f)\) which contains for each cipher token \(f\) a probability that the token \(f\) is substituted with the plaintext token \(e\). Such a table for substituting cipher tokens {A, B, C, D} with plaintext tokens {a, b, c, d} could for example look like
        a    b    c    d
    A   0.1  0.2  0.3  0.4
    B   0.4  0.2  0.1  0.3
    C   0.4  0.1  0.2  0.3
    D   0.3  0.4  0.2  0.1
The 1:1 substitution cipher encrypts a given plaintext into a ciphertext by replacing each plaintext token with a unique substitute: This means that the table \(s(e|f)\) contains all zeroes, except for one "1.0" per \(f \in V_f\) and one "1.0" per \(e \in V_e\). For example the text abadcab would be enciphered to BCBADBC when using the substitution
        a    b    c    d
    A   0    0    0    1
    B   1    0    0    0
    C   0    1    0    0
    D   0    0    1    0
We formalize the 1:1 substitutions with a bijective function \(\phi : V_f \rightarrow V_e\). The general decipherment goal is to obtain a mapping \(\phi\) such that the probability of the deciphered text is maximal:
\[
\hat{\phi} = \arg\max_{\phi}\; p\bigl(\phi(f_1)\phi(f_2)\phi(f_3)\dots\phi(f_N)\bigr) \tag{2}
\]
Here \(p(\dots)\) denotes the language model. Depending on the structure of the language model, Equation 2 can be further simplified.
Given a ciphertext \(f_1^N\), we define the unigram count \(N_f\) of \(f \in \overline{V}_f\) as (here \(\delta\) denotes the Kronecker delta)
\[
N_f = \sum_{i=0}^{N+1} \delta(f, f_i) \tag{3}
\]
This implies that \(N_f\) are integer counts \(\ge 0\). We similarly define the bigram count \(N_{ff'}\) of \(f, f' \in \overline{V}_f\) as
\[
N_{ff'} = \sum_{i=1}^{N+1} \delta(f, f_{i-1}) \cdot \delta(f', f_i) \tag{4}
\]
This definition implies that
(a) \(N_{ff'}\) are integer counts \(\ge 0\) of bigrams found in the ciphertext \(f_1^N\).
(b) Given the first and last token of the cipher, \(f_1\) and \(f_N\), the bigram counts involving the sentence boundary token \$ need to fulfill
\[
N_{\$f} = \delta(f, f_1) \tag{5}
\]
\[
N_{f\$} = \delta(f, f_N) \tag{6}
\]
(c) For all \(f \in V_f\)
\[
\sum_{f' \in V_f} N_{ff'} = \sum_{f' \in V_f} N_{f'f} \tag{7}
\]
must hold.
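As a minimal sketch of the count definitions in Eqs. 3–4 (including the boundary handling and a check of property (7)), consider the abadcab/BCBADBC toy cipher from above:

```python
from collections import Counter

BOUNDARY = "$"

def ngram_counts(cipher_tokens):
    """Unigram counts N_f and bigram counts N_ff' for a cipher wrapped in
    sentence-boundary tokens, following Eqs. 3-4."""
    seq = [BOUNDARY] + list(cipher_tokens) + [BOUNDARY]
    unigram = Counter(seq)
    bigram = Counter(zip(seq[:-1], seq[1:]))
    return unigram, bigram

uni, bi = ngram_counts("BCBADBC")
vocab = set(uni)
# property (7): for every f, the outgoing bigram mass equals the incoming one
assert all(sum(bi[(f, g)] for g in vocab) == sum(bi[(g, f)] for g in vocab)
           for f in vocab)
```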
Similarly, we define language model matrices \(S\) for the unigram and the bigram case. The unigram language model \(S_f\) is defined as
\[
S_f = \log p(f) \tag{8}
\]
which implies that
(a) \(S_f\) are real numbers with
\[
S_f \in [-\infty, 0] \tag{9}
\]
(b) The following normalization constraint holds:
\[
\sum_{f \in V_f} \exp(S_f) = 1 \tag{10}
\]
Similarly, for the bigram language model matrix \(S_{ff'}\), we define
\[
S_{ff'} = \log p(f'|f) \tag{11}
\]
This definition implies that
(a) \(S_{ff'}\) are real numbers with
\[
S_{ff'} \in [-\infty, 0] \tag{12}
\]
(b) For the sentence boundary symbol, it holds that
\[
S_{\$\$} = -\infty \tag{13}
\]
(c) For all \(f \in V_f\) the following normalization constraint holds:
\[
\sum_{f' \in V_f} \exp(S_{ff'}) = 1 \tag{14}
\]
4 Decipherment Using Unigram LMs
4.1 Problem Definition
When using a unigram language model, Equation 2 simplifies to finding
\[
\hat{\phi} = \arg\max_{\phi} \prod_{i=1}^{N} p(\phi(f_i)) \tag{15}
\]
which can be rewritten as
\[
\hat{\phi} = \arg\max_{\phi} \sum_{f \in V_f} N_f\, S_{\phi(f)} \tag{16}
\]
When defining \(c_{ff'} = N_f \log p(f')\), for \(f, f' \in V_f\), Equation 16 can be brought into the form of
\[
\hat{\phi} = \arg\max_{\phi} \sum_{f \in V_f} c_{f\phi(f)} \tag{17}
\]
Figure 1 shows an illustration of this problem.
Figure 1: Linear sum assignment problem for a cipher with \(V_e = \{a, b, c\}\), \(V_f = \{A, B, C\}\), unigram counts \(N_f\), and unigram probabilities \(p(e)\). The cost matrix \(c_{ij}\) pairs each cipher token with each plaintext token:
    c_ij   A              B              C
    a      N_A log p(a)   N_B log p(a)   N_C log p(a)
    b      N_A log p(b)   N_B log p(b)   N_C log p(b)
    c      N_A log p(c)   N_B log p(c)   N_C log p(c)
4.2 The Linear Sum Assignment Problem
The practical problem behind the linear sum assignment problem can be described as follows: Given jobs \(\{j_1,\dots,j_n\}\) and workers \(\{w_1,\dots,w_n\}\), the task is to assign each job \(j_i\) to a worker \(w_j\). Each assignment incurs a cost \(c_{ij}\) and the total cost for assigning all jobs and workers is to be minimized. This can be formalized as finding the assignment
\[
\hat{\phi} = \arg\min_{\phi} \sum_{i=1}^{n} c_{i\phi(i)} \tag{18}
\]
The general LSAP can be solved in polynomial time using the Hungarian algorithm (Kuhn, 1955). However, since the matrix \(c_{ij}\) occurring for the decipherment using a unigram language model can be represented as the product \(c_{ij} = a_i \cdot b_j\), the decipherment problem can be solved more easily: In the Section "Optimal Matching", (Bauer, 2010) shows that in this case the optimal assignment is found by sorting the jobs \(j_i\) by \(a_i\) and workers \(w_j\) by \(b_j\) and then assigning the jobs \(j_i\) to workers \(w_j\) that have the same rank in the respective sorted lists. Sorting and then assigning the elements can be done in \(O(n \log n)\).
5 Decipherment Using Bigram LMs
5.1 Problem Definition
When using a 2-gram language model, Equation 2 simplifies to
\[
\hat{\phi} = \arg\max_{\phi} \prod_{j=1}^{N+1} p\bigl(\phi(f_j) \mid \phi(f_{j-1})\bigr) \tag{19}
\]
Figure 2: Hypothetical quadratic assignment problem with locations \(l_1 \dots l_4\) and facilities \(f_1 \dots f_4\), with all flows being zero except \(f_1 \leftrightarrow f_2\) and \(f_3 \leftrightarrow f_4\). The distance between locations \(l_1 \dots l_4\) is implicitly given by the locations in the plane, implying a Euclidean metric. Two example assignments (a) and (b) are shown, with (b) having the lower overall costs.
Using the definitions from Section 3, Equation 19 can be rewritten as
\[
\hat{\phi} = \arg\max_{\phi} \sum_{f \in V_f} \sum_{f' \in V_f} N_{ff'}\, S_{\phi(f)\phi(f')} \tag{20}
\]
(Bauer, 2010) arrives at a similar optimization problem for the "combined method of frequency matching" using bigrams and mentions that it can be seen as a combinatorial problem for which an efficient way of solving is not known. However, he does not mention the close connection to the quadratic assignment problem.
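Before turning to the quadratic assignment view, here is a minimal sketch of the rank-matching solution for the unigram case of Section 4.2: cipher tokens sorted by count are paired with plaintext tokens sorted by unigram log-probability. The toy counts and log-probabilities are assumed for illustration only, and ties are broken arbitrarily.

```python
def decipher_unigram(unigram_counts, unigram_logprob):
    """Rank matching for the unigram objective of Eq. 16: most frequent cipher
    token -> most probable plaintext token, and so on; '$' is left out."""
    ciphers = sorted(unigram_counts, key=unigram_counts.get, reverse=True)
    plains = sorted(unigram_logprob, key=unigram_logprob.get, reverse=True)
    return dict(zip(ciphers, plains))

phi = decipher_unigram({"A": 3, "B": 2, "C": 1, "D": 1},
                       {"a": -0.8, "b": -1.2, "c": -2.0, "d": -2.3})
# phi == {'A': 'a', 'B': 'b', 'C': 'c', 'D': 'd'}  (ties broken arbitrarily)
```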
5.2 The Quadratic Assignment Problem
The quadratic assignment problem was introduced by (Beckmann and Koopmans, 1957) for the following real-life problem: Given a set of facilities \(\{f_1,\dots,f_n\}\) and a set of locations \(\{l_1,\dots,l_n\}\) with distances for each pair of locations, and flows for each pair of facilities (e.g. the amount of supplies to be transported between a pair of facilities), the problem is to assign the facilities to locations such that the sum of the distances multiplied by the corresponding flows (which can be interpreted as total transportation costs) is minimized. This is visualized in Figure 2. Following (Beckmann and Koopmans, 1957) we can express the quadratic assignment problem as finding
\[
\hat{\phi} = \arg\min_{\phi} \Biggl\{ \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\, b_{\phi(i)\phi(j)} + \sum_{i=1}^{n} c_{i\phi(i)} \Biggr\} \tag{21}
\]
where \(A = (a_{ij}),\, B = (b_{ij}),\, C = (c_{ij}) \in \mathbb{N}^{n \times n}\) and \(\phi\) a permutation
\[
\phi : \{1,\dots,n\} \rightarrow \{1,\dots,n\}. \tag{22}
\]
This formulation is often referred to as Koopmans-Beckmann QAP and often abbreviated as QAP(\(A, B, C\)). The so-called pure or homogeneous QAP
\[
\hat{\phi} = \arg\min_{\phi} \Biggl\{ \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\, b_{\phi(i)\phi(j)} \Biggr\} \tag{23}
\]
is obtained by setting \(c_{ij} = 0\), and is often denoted as QAP(\(A, B\)). In terms of the real-life problem presented in (Beckmann and Koopmans, 1957), the matrix \(A\) can be interpreted as a distance matrix for locations \(\{l_1 \dots l_n\}\) and \(B\) as a flow matrix for facilities \(\{f_1 \dots f_n\}\). (Sahni and Gonzalez, 1976) show that the quadratic assignment problem is strongly NP-hard. We will now show the relation between the quadratic assignment problem and the decipherment problem.
5.3 Decipherment Problem ⪯ Quadratic Assignment Problem
Every decipherment problem is directly a quadratic assignment problem, since the matrices \(N_{ff'}\) and \(S_{ff'}\) are just special cases of the general matrices \(A\) and \(B\) required for the quadratic assignment problem. Thus a reduction from the decipherment problem to the quadratic assignment problem is trivial. This means that all algorithms capable of solving QAPs can directly be used to solve the decipherment problem.
5.4 Quadratic Assignment Problem ⪯ Decipherment Problem
Given QAP(\(A, B\)) with integer matrices \(A = (a_{ij}),\, B = (b_{ij}),\; i,j \in \{1,\dots,n\}\), we construct the count matrix \(N_{ff'}\) and language model matrix \(S_{ff'}\) in such a way that the solution for the decipherment problem implies the solution to the quadratic assignment problem, and vice versa. We will use the vocabularies \(\overline{V}_e = \overline{V}_f = \{1,\dots,n+3\}\), with \(n+3\) being the special sentence boundary token "\$". The construction of \(N_{ff'}\) and \(S_{ff'}\) is shown in Figure 3. To show the validity of our construction, we will
1. Show that \(N_{ff'}\) is a valid count matrix.
2. Show that \(S_{ff'}\) is a valid bigram language model matrix.
3. Show that the decipherment problem and the newly constructed quadratic assignment problem are equivalent.
We start by showing that \(N_{ff'}\) is a valid count matrix:
(a) By construction, \(N_{ff'}\) has integer counts that are greater than or equal to 0.
(b) By construction, \(N_{ff'}\) at the boundaries is:
- \(N_{\$f} = \delta(f, 1)\)
- \(N_{f\$} = \delta(f, n+2)\)
(c) Regarding the properties \(\sum_{f'} N_{ff'} = \sum_{f'} N_{f'f}\):
- For all \(f \in \{1,\dots,n\}\) the count properties are equivalent to
\[
\tilde{a}_{f*} + \sum_{f'} \tilde{a}_{ff'} = \tilde{a}_{*f} + \sum_{f'} \tilde{a}_{f'f} + \delta(f, 1) \tag{24}
\]
which holds by construction of \(\tilde{a}_{*f}\) and \(\tilde{a}_{f*}\).
- For \(f = n+1\) the count property is equivalent to
\[
1 + \sum_{f'} \tilde{a}_{f'*} = 2 + \sum_{f'} \tilde{a}_{*f'} \tag{25}
\]
which follows from Equation (24) by summing over all \(f \in \{1,\dots,n\}\).
- For \(f = n+2\) and \(f = n+3\), the condition is fulfilled by construction.
We now show that \(S_{ff'}\) is a valid bigram language model matrix:
(a) By construction, \(S_{ff'} \in [-\infty, 0]\) holds.
(b) By construction, \(S_{\$\$} = -\infty\) holds.
(c) By the construction of \(\tilde{b}_{f*}\), the values \(S_{ff'}\) fulfill \(\sum_{f'} \exp(S_{ff'}) = 1\) for all \(f\). This works since all entries \(\tilde{b}_{ff'}\) are chosen to be smaller than \(-\log(n+2)\).
We now show the equivalence of the quadratic assignment problem and the newly constructed decipherment problem. For this we will use the definitions
\[
\tilde{A} = \{1,\dots,n\} \tag{26}
\]
\[
\tilde{B} = \{n+1,\, n+2,\, n+3\} \tag{27}
\]
We first show that solutions of the constructed decipherment problem with score \(> -\infty\) fulfill \(\phi(f) = f\) for \(f \in \tilde{B}\). All mappings \(\phi\) with \(\phi(f) = f'\) for any \(f \in \tilde{A}\) and \(f' \in \tilde{B}\) will induce a score of \(-\infty\), since for \(f \in \tilde{A}\) all \(N_{ff} > 0\) and \(S_{f'f'} = -\infty\) for \(f' \in \tilde{B}\). Thus any \(\phi\) with score \(> -\infty\) will fulfill \(\phi(f) \in \tilde{B}\) for \(f \in \tilde{B}\). Further, by enumerating all six possible permutations, it can be seen that only the \(\phi\) with \(\phi(f) = f\) for \(f \in \tilde{B}\) induces a score of \(> -\infty\). Thus we can rewrite
\[
\sum_{f=1}^{n+3} \sum_{f'=1}^{n+3} N_{ff'}\, S_{\phi(f)\phi(f')} \tag{28}
\]
to
\[
\underbrace{\sum_{f \in \tilde{A}} \sum_{f' \in \tilde{A}} N_{ff'} S_{\phi(f)\phi(f')}}_{(AA)}
+ \underbrace{\sum_{f \in \tilde{A}} \sum_{f' \in \tilde{B}} N_{ff'} S_{\phi(f)f'}}_{(AB)}
+ \underbrace{\sum_{f \in \tilde{B}} \sum_{f' \in \tilde{A}} N_{ff'} S_{f\phi(f')}}_{(BA)}
+ \underbrace{\sum_{f \in \tilde{B}} \sum_{f' \in \tilde{B}} N_{ff'} S_{ff'}}_{(BB)}
\]
Here
- (AB) is independent of \(\phi\) since
\[
\forall f \in \tilde{A},\; f' \in \{n+1, n+3\} : S_{ff'} = S_{1f'} \tag{29}
\]
and
\[
\forall f \in \tilde{A} : N_{f,n+2} = 0 \tag{30}
\]
- (BA) is independent of \(\phi\) since
\[
\forall f' \in \tilde{A},\; f \in \tilde{B} : S_{ff'} = S_{f1} \tag{31}
\]
- (BB) is independent of \(\phi\).
Figure 3: Construction of the matrices \(N_{ff'}\) and \(S_{ff'}\) of the decipherment problem from the matrices \(A = (a_{ij})\) and \(B = (b_{ij})\) of the quadratic assignment problem QAP(\(A, B\)):
\[
N_{ff'} =
\begin{pmatrix}
\tilde{a}_{11} & \tilde{a}_{12} & \cdots & \tilde{a}_{1n} & \tilde{a}_{1*} & 0 & 0 \\
\tilde{a}_{21} & \tilde{a}_{22} & \cdots & \tilde{a}_{2n} & \tilde{a}_{2*} & 0 & 0 \\
\vdots & \vdots & & \vdots & \vdots & \vdots & \vdots \\
\tilde{a}_{n1} & \tilde{a}_{n2} & \cdots & \tilde{a}_{nn} & \tilde{a}_{n*} & 0 & 0 \\
\tilde{a}_{*1} & \tilde{a}_{*2} & \cdots & \tilde{a}_{*n} & 0 & 2 & 0 \\
0 & 0 & \cdots & 0 & 1 & 0 & 1 \\
1 & 0 & \cdots & 0 & 0 & 0 & 0
\end{pmatrix}
\qquad
S_{ff'} =
\begin{pmatrix}
\tilde{b}_{11} & \tilde{b}_{12} & \cdots & \tilde{b}_{1n} & \varepsilon_2 & \tilde{b}_{1*} & \varepsilon_2 \\
\tilde{b}_{21} & \tilde{b}_{22} & \cdots & \tilde{b}_{2n} & \varepsilon_2 & \tilde{b}_{2*} & \varepsilon_2 \\
\vdots & \vdots & & \vdots & \vdots & \vdots & \vdots \\
\tilde{b}_{n1} & \tilde{b}_{n2} & \cdots & \tilde{b}_{nn} & \varepsilon_2 & \tilde{b}_{n*} & \varepsilon_2 \\
\varepsilon_1 & \varepsilon_1 & \cdots & \varepsilon_1 & -\infty & \varepsilon_1 & -\infty \\
\varepsilon_2 & \varepsilon_2 & \cdots & \varepsilon_2 & \varepsilon_2 & -\infty & \varepsilon_2 \\
\varepsilon_0 & \varepsilon_0 & \cdots & \varepsilon_0 & -\infty & -\infty & -\infty
\end{pmatrix}
\]
with
\[
\tilde{a}_{ff'} = a_{ff'} - \min_{\tilde{f}\tilde{f}'}\{a_{\tilde{f}\tilde{f}'}\} + 1
\qquad
\tilde{b}_{ff'} = b_{ff'} - \max_{\tilde{f}\tilde{f}'}\{b_{\tilde{f}\tilde{f}'}\} - \log(n+2)
\]
\[
\tilde{a}_{f*} = \max\Bigl\{\sum_{f'=1}^{n} \bigl(a_{f'f} - a_{ff'}\bigr),\, 0\Bigr\} + \delta(f,1)
\qquad
\tilde{b}_{f*} = \log\Bigl(1 - \sum_{f'=1}^{n} \exp(\tilde{b}_{ff'}) - \frac{2}{n+2}\Bigr)
\]
\[
\tilde{a}_{*f'} = \max\Bigl\{\sum_{f=1}^{n} \bigl(a_{ff'} - a_{f'f}\bigr),\, 0\Bigr\}
\qquad
\varepsilon_i = -\log(n+i)
\]
Thus, with some constant \(c\), we can finally rewrite Equation 28 as
\[
c + \sum_{f=1}^{n} \sum_{f'=1}^{n} N_{ff'}\, S_{\phi(f)\phi(f')} \tag{32}
\]
Inserting the definition of \(N_{ff'}\) and \(S_{ff'}\) (simplified using constants \(c'\) and \(c''\)) we obtain
\[
c + \sum_{f=1}^{n} \sum_{f'=1}^{n} (a_{ff'} + c')(b_{\phi(f)\phi(f')} + c'') \tag{33}
\]
which is equivalent to the original quadratic assignment problem
\[
\arg\max_{\phi} \Biggl\{ \sum_{f=1}^{n} \sum_{f'=1}^{n} a_{ff'}\, b_{\phi(f)\phi(f')} \Biggr\} \tag{34}
\]
Thus we have shown that a solution to the quadratic assignment problem in Equation 34 is a solution to the decipherment problem in Equation 20, and vice versa. Assuming that calculating elementary functions can be done in \(O(1)\), setting up \(N_{ff'}\) and \(S_{ff'}\) can be done in polynomial time.² (²This is the case if we only require a fixed number of digits precision for the log and exp operations.) Thus we have given a polynomial time reduction from the quadratic assignment problem to the decipherment problem: Since the quadratic assignment problem is NP-hard, it follows that the decipherment problem is NP-hard, too.
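Consistent with this hardness result, the only generally exact approach for the bigram objective of Eq. 20 is exhaustive search over all |V_f|! substitutions, as in the following sketch. Here lm_logprob is an assumed bigram language-model callback, the boundary token maps to itself, and the enumeration is feasible only for toy vocabularies.

```python
import itertools

def decipher_bigram_bruteforce(Vf, Ve, bigram_counts, lm_logprob):
    """Exhaustive maximization of Eq. 20 over 1:1 substitutions.
    Vf, Ve exclude '$'; bigram_counts maps (f, f') -> N_ff';
    lm_logprob(e_prev, e) returns log p(e | e_prev)."""
    best_phi, best_score = None, float("-inf")
    for perm in itertools.permutations(Ve):
        phi = dict(zip(Vf, perm))
        phi["$"] = "$"
        score = sum(n * lm_logprob(phi[f1], phi[f2])
                    for (f1, f2), n in bigram_counts.items())
        if score > best_score:
            best_phi, best_score = phi, score
    return best_phi, best_score
```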
5.5 Traveling Salesman Problem ⪯ Decipherment Problem Using the above construction we can immediately construct a decipherment problem that is equivalent to the traveling salesman problem by using the quadratic assignment problem formulation of the traveling salesman problem. Without loss of generality3 we assume that the TSP’s distance matrix fulfills the constraints of a bigram language model matrix Sff′. Then the count matrix Nff′ needs to be chosen as Nff′ =            0 1 0 · · · 0 0 0 0 0 1 · · · 0 0 0 0 0 0 · · · 0 0 0 ... ... ... ... ... ... ... 0 0 0 · · · 0 1 0 0 0 0 · · · 0 0 1 1 0 0 · · · 0 0 0            (35) which fulfills the constraints of a bigram count matrix. 3The general case can be covered using the reduction shown in Section 5. 620 This matrix corresponds to a ciphertext of the form $abcd$ (36) and represents the tour of the traveling salesman in an intuitive way. The mapping φ then only decides in which order the cities are visited, and only costs between two successive cities are counted. This shows that the TSP is only a special case of the decipherment problem. 6 Conclusion We have shown the correspondence between solving 1:1 substitution ciphers and the linear sum assignment problem and the quadratic assignment problem: When using unigram language models, the decipherment problem is equivalent to the linear sum assignment problem and solvable in polynomial time. For a bigram language model, the decipherment problem is equivalent to the quadratic assignment problem and is NP-hard. We also pointed out that all available algorithms for the quadratic assignment problem can be directly used to solve the decipherment problem. To the best of our knowledge, this correspondence between the decipherment problem and the quadratic assignment problem has not been known previous to our work. Acknowledgements This work was partly realized as part of the Quaero Programme, funded by OSEO, French State agency for innovation. References Friedrich L. Bauer. 2010. Decrypted Secrets: Methods and Maxims of Cryptology. Springer, 4th edition. Martin J. Beckmann and Tjalling C. Koopmans. 1957. Assignment problems and the location of economic activities. Econometrica, 25(4):53–76. Rainer E. Burkard and Eranda ela. 1999. Linear assignment problems and extensions. In Handbook of Combinatorial Optimization - Supplement Volume A, pages 75–149. Kluwer Academic Publishers. Rainer E. Burkard, Eranda ela, Panos M. Pardalos, and Leonidas S. Pitsoulis. 1998. The quadratic assignment problem. In Handbook of Combinatorial Optimization, pages 241–338. Kluwer Academic Publishers. Eric Corlett and Gerald Penn. 2010. An exact A* method for deciphering letter-substitution ciphers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1040–1047, Uppsala, Sweden, July. The Association for Computer Linguistics. Qing Dou and Kevin Knight. 2012. Large scale decipherment for out-of-domain machine translation. In Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 266–275, Jeju Island, Korea, July. Association for Computational Linguistics. Kevin Knight and Kenji Yamada. 1999. A computational approach to deciphering unknown scripts. In Proceedings of the ACL Workshop on Unsupervised Learning in Natural Language Processing, number 1, pages 37–44. Association for Computational Linguistics. Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. 
Unsupervised analysis for decipherment problems. In Proceedings of the Conference on Computational Linguistics and Association of Computation Linguistics (COLING/ACL) Main Conference Poster Sessions, pages 499–506, Sydney, Australia, July. Association for Computational Linguistics. Harold W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval Research Logistic Quarterly, 2(1-2):83–97. Malte Nuhn, Arne Mauser, and Hermann Ney. 2012. Deciphering foreign language by combining language models and context vectors. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL), pages 156–164, Jeju, Republic of Korea, July. Association for Computational Linguistics. Sujith Ravi and Kevin Knight. 2008. Attacking decipherment problems optimally with low-order ngram models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 812–819, Honolulu, Hawaii. Association for Computational Linguistics. Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACLHLT), pages 12–21, Portland, Oregon, USA, June. Association for Computational Linguistics. Sartaj Sahni and Teofilo Gonzalez. 1976. P-complete approximation problems. Journal of the Association for Computing Machinery (JACM), 23(3):555–565, July. 621
2013
60
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 622–630, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Non-Monotonic Sentence Alignment via Semisupervised Learning Xiaojun Quan, Chunyu Kit and Yan Song Department of Chinese, Translation and Linguistics City University of Hong Kong, HKSAR, China {xiaoquan,ctckit,[yansong]}@[student.]cityu.edu.hk Abstract This paper studies the problem of nonmonotonic sentence alignment, motivated by the observation that coupled sentences in real bitexts do not necessarily occur monotonically, and proposes a semisupervised learning approach based on two assumptions: (1) sentences with high affinity in one language tend to have their counterparts with similar relatedness in the other; and (2) initial alignment is readily available with existing alignment techniques. They are incorporated as two constraints into a semisupervised learning framework for optimization to produce a globally optimal solution. The evaluation with realworld legal data from a comprehensive legislation corpus shows that while existing alignment algorithms suffer severely from non-monotonicity, this approach can work effectively on both monotonic and non-monotonic data. 1 Introduction Bilingual sentence alignment is a fundamental task to undertake for the purpose of facilitating many important natural language processing applications such as statistical machine translation (Brown et al., 1993), bilingual lexicography (Klavans et al., 1990), and cross-language information retrieval (Nie et al., 1999). Its objective is to identify correspondences between bilingual sentences in given bitexts. As summarized by Wu (2010), existing sentence alignment techniques rely mainly on sentence length and bilingual lexical resource. Approaches based on the former perform effectively on cognate languages but not on the others. For instance, the statistical correlation of sentence length between English and Chinese is not as high as that between two IndoEuropean languages (Wu, 1994). Lexicon-based approaches resort to word correspondences in a bilingual lexicon to match bilingual sentences. A few sentence alignment methods and tools have also been explored to combine the two. Moore (2002) proposes a multi-pass search procedure using both sentence length and an automaticallyderived bilingual lexicon. Hunalign (Varga et al., 2005) is another sentence aligner that combines sentence length and a lexicon. Without a lexicon, it backs off to a length-based algorithm and then automatically derives a lexicon from the alignment result. Soon after, Ma (2006) develops the lexicon-based aligner Champollion, assuming that different words have different importance in aligning two sentences. Nevertheless, most existing approaches to sentence alignment follow the monotonicity assumption that coupled sentences in bitexts appear in a similar sequential order in two languages and crossings are not entertained in general (Langlais et al., 1998; Wu, 2010). Consequently the task of sentence alignment becomes handily solvable by means of such basic techniques as dynamic programming. In many scenarios, however, this prerequisite monotonicity cannot be guaranteed. For example, bilingual clauses in legal bitexts are often coordinated in a way not to keep the same clause order, demanding fully or partially crossing pairings. Figure 1 shows a real excerpt from a legislation corpus. 
Such monotonicity seriously impairs the existing alignment approaches founded on the monotonicity assumption. This paper is intended to explore the problem of non-monotonic alignment within the framework of semisupervised learning. Our approach is motivated by the above observation and based on the following two assumptions. First, monolingual sentences with high affinity are likely to have their translations with similar relatedness. Following this assumption, we propose the conception of monolingual consistency which, to the best of 622 British Overseas citizen" (劙⚳㴟⢾℔㮹) means a person who has the status of a British Overseas citizen under the British Nationality Act 1981 (1981 c. 61 U.K.) British protected person" (⍿劙⚳ᾅ嬟Ṣ⢓) means a person who has the status of a British protected person under the British Nationality Act 1981 (1981 c. 61 U.K.) ... 1. Interpretation of words and expressions British citizen" (劙⚳℔㮹) means a person who has the status of a British citizen under the British Nationality Act 1981 (1981 c. 61 U.K.) British Dependent Territories citizen" (劙⚳Ⱄ⛇℔㮹) means a person who has or had the status of a British Dependent Territories citizen under the British Nationality Act 1981 (1981 c. 61 U.K.) British enactment" and "imperial enactment" (劙⚳ㆸ㔯㱽⇯) Mean(a) any Act of Parliament; (b) any Order in Council; and (c) any rule, regulation, proclamation, order, notice, rule of court, by-law or other instrument made under or by virtue of any such Act or Order in Council 劙⚳㴟⢾℔㮹ȿ(British Overseas citizen) ㊯㟡㒂˪1981⸜劙⚳⚳䯵 㱽Ẍ˫(1981 c. 61 U.K.)℟㚱劙⚳㴟⢾℔㮹幓↮䘬Ṣ 劙⚳Ⱄ⛇℔㮹ȿ(British Dependent Territories citizen) ㊯㟡㒂˪1981 ⸜劙⚳⚳䯵㱽Ẍ˫(1981 c. 61 U.K.)℟㚱ㆾ㚦℟㚱劙⚳Ⱄ⛇℔㮹幓 ↮䘬Ṣ 1.娆婆␴娆⎍䘬慳佑 ⍿劙⚳ᾅ嬟Ṣ⢓ȿĩŃųŪŵŪŴũġűųŰŵŦŤŵŦťġűŦųŴŰůĪġ㊯㟡㒂˪IJĺĹIJ⸜劙⚳⚳ 䯵㱽Ẍ˫ĩIJĺĹIJġŤįġķIJġŖįŌįĪ℟㚱⍿劙⚳ᾅ嬟Ṣ⢓幓↮䘬Ṣ 劙⚳℔㮹ȿ(British citizen) ㊯㟡㒂˪1981⸜劙⚳⚳䯵㱽Ẍ˫(1981 c. 61 U.K.)℟㚱劙⚳℔㮹幓↮䘬Ṣ 劙⚳ㆸ㔯㱽⇯ȿ(British enactment, imperial enactment) ㊯ʇ(a)ảỽ ⚳㚫忂忶䘬㱽Ẍ烊(b)ảỽ㧆⭮昊枺Ẍ烊⍲(c)㟡㒂ㆾㄹ啱ảỽ娚䫱 㱽Ẍㆾ㧆⭮昊枺Ẍ侴妪䩳䘬ảỽ夷⇯ˣ夷ἳˣ㔯⏲ˣ␥Ẍˣ℔ ⏲ˣ㱽昊夷⇯ˣ旬ἳㆾ℞Ṿ㔯㚠 ... " " " " " Ⱦ Ⱦ Ⱦ Ⱦ Ⱦ Figure 1: A real example of non-monotonic sentence alignment from BLIS corpus. our knowledge, has not been taken into account in any previous work of alignment. Second, initial alignment of certain quality can be obtained by means of existing alignment techniques. Our approach attempts to incorporate both monolingual consistency of sentences and bilingual consistency of initial alignment into a semisupervised learning framework to produce an optimal solution. Extensive evaluations are performed using real-world legislation bitexts from BLIS, a comprehensive legislation database maintained by the Department of Justice, HKSAR. Our experimental results show that the proposed method can work effectively while two representatives of existing aligners suffer severely from the non-monotonicity. 2 Methodology 2.1 The Problem An alignment algorithm accepts as input a bitext consisting of a set of source-language sentences, S = {s1, s2, . . . , sm}, and a set of targetlanguage sentences, T = {t1, t2, . . . , tn}. Different from previous works relying on the monotonicity assumption, our algorithm is generalized to allow the pairings of sentences in S and T to cross arbitrarily. Figure 2(a) illustrates monotonic alignment with no crossing correspondences in a bipartite graph and 2(b) non-monotonic alignment with scrambled pairings. 
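A simple way to quantify the distinction drawn in Figure 2 is to count crossed pairings, as in the following sketch (a hypothetical helper, not part of the proposed method): an alignment is monotonic exactly when the count is zero.

```python
def crossing_pairs(alignment):
    """Count crossed pairings in a one-to-one alignment given as (i, j) index
    pairs; an O(k^2) check that only serves to illustrate Figure 2."""
    crossings = 0
    for a in range(len(alignment)):
        for b in range(a + 1, len(alignment)):
            (i1, j1), (i2, j2) = alignment[a], alignment[b]
            if (i1 - i2) * (j1 - j2) < 0:
                crossings += 1
    return crossings

print(crossing_pairs([(1, 1), (2, 2), (3, 3)]))   # 0 -> monotonic
print(crossing_pairs([(1, 2), (2, 1), (3, 3)]))   # 1 -> non-monotonic
```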
Note that it is relatively straightforward to identify the type of many-to-many alignment in monotonic alignment using techniques such as dynamic programming if there is no scrambled pairing or the scrambled pairings are local, limited to a short distance. However, the situation of non-monotonic alignment is much more complicated. Sentences to be merged into a bundle for matching against another bundle in the other language may occur consecutively or discontinuously. For the sake of simplicity, we will not consider non-monotonic alignment with many-to-many pairings but rather assume that each sentence may align to only one or zero sentence in the other language.
Let \(\mathcal{F}\) represent the correspondence relation between \(S\) and \(T\), and therefore \(\mathcal{F} \subset S \times T\). Let matrix \(F\) denote a specific alignment solution of \(\mathcal{F}\), where \(F_{ij}\) is a real score to measure the likelihood of matching the \(i\)-th sentence \(s_i\) in \(S\) against the \(j\)-th sentence \(t_j\) in \(T\). We then define an alignment function \(\mathcal{A} : F \rightarrow A\) to produce the final alignment, where \(A\) is the alignment matrix for \(S\) and \(T\), with \(A_{ij} = 1\) for a correspondence between \(s_i\) and \(t_j\) and \(A_{ij} = 0\) otherwise.
2.2 Semisupervised Learning
A semisupervised learning framework is introduced to incorporate the monolingual and bilingual consistency into alignment scoring
\[
Q(F) = Q_m(F) + \lambda Q_b(F), \tag{1}
\]
where \(Q_m(F)\) is the term for the monolingual constraint to control the consistency of sentences with high affinities, \(Q_b(F)\) is for the constraint of initial alignment obtained with existing techniques, and \(\lambda\) is the weight between them. Then, the optimal alignment solution is to be derived by minimizing the cost function \(Q(F)\), i.e.,
\[
F^* = \arg\min_{F} Q(F). \tag{2}
\]
Figure 2: Illustration of monotonic (a) and non-monotonic alignment (b), with a line representing the correspondence of two bilingual sentences.
In this paper, \(Q_m(F)\) is defined as
\[
Q_m(F) = \frac{1}{4} \sum_{i,j=1}^{m} W_{ij} \sum_{k,l=1}^{n} V_{kl} \left( \frac{F_{ik}}{\sqrt{D_{ii}E_{kk}}} - \frac{F_{jl}}{\sqrt{D_{jj}E_{ll}}} \right)^{2}, \tag{3}
\]
where \(W\) and \(V\) are the symmetric matrices to represent the monolingual sentence affinity matrices in \(S\) and \(T\), respectively, and \(D\) and \(E\) are the diagonal matrices with entries \(D_{ii} = \sum_j W_{ij}\) and \(E_{ii} = \sum_j V_{ij}\). The idea behind (3) is that, to minimize the cost function, the translations of those monolingual sentences with close relatedness reflected in \(W\) and \(V\) should also keep similar closeness. The bilingual constraint term \(Q_b(F)\) is defined as
\[
Q_b(F) = \sum_{i=1}^{m} \sum_{j=1}^{n} \bigl( F_{ij} - \hat{A}_{ij} \bigr)^{2}, \tag{4}
\]
where \(\hat{A}\) is the initial alignment matrix obtained by \(\mathcal{A} : \hat{F} \rightarrow \hat{A}\). Note that \(\hat{F}\) is the initial relation matrix between \(S\) and \(T\).
The monolingual constraint term \(Q_m(F)\) defined above corresponds to the smoothness constraint in the previous semisupervised learning work by Zhou et al. (2004), which assigns higher likelihood to objects with larger similarity to share the same label. On the other hand, \(Q_b(F)\) corresponds to their fitting constraint, which requires the final alignment to maintain the maximum consistency with the initial alignment.
Taking the derivative of \(Q(F)\) with respect to \(F\), we have
\[
\frac{\partial Q(F)}{\partial F} = 2F - 2SFT + 2\lambda F - 2\lambda \hat{A}, \tag{5}
\]
where \(S\) and \(T\) are the normalized matrices of \(W\) and \(V\), calculated by \(S = D^{-1/2}WD^{-1/2}\) and \(T = E^{-1/2}VE^{-1/2}\). Then, the optimal \(F^*\) is to be found by solving the equation
\[
(1+\lambda)F^* - SF^*T = \lambda \hat{A}, \tag{6}
\]
which is equivalent to \(\alpha F^* - F^*\beta = \gamma\) with \(\alpha = (1+\lambda)S^{-1}\), \(\beta = T\) and \(\gamma = \lambda S^{-1}\hat{A}\). This is in fact a Sylvester equation (Barlow et al., 1992), whose numerical solution can be found by many classical algorithms.
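For illustration, Eq. 6 can be solved with an off-the-shelf Sylvester solver. The sketch below uses SciPy's solve_sylvester (which relies on LAPACK routines) and assumes that the normalized affinity matrix S is invertible and that W and V have strictly positive row sums; it is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def optimal_relation_matrix(W, V, A_init, lam=1.0):
    """Solve (1 + lam) F - S F T = lam * A_init (Eq. 6) by rewriting it as the
    Sylvester equation alpha X + X beta = gamma with alpha = (1+lam) S^-1,
    beta = -T, gamma = lam S^-1 A_init."""
    Dm = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    Em = np.diag(1.0 / np.sqrt(V.sum(axis=1)))
    S = Dm @ W @ Dm                      # D^(-1/2) W D^(-1/2)
    T = Em @ V @ Em                      # E^(-1/2) V E^(-1/2)
    S_inv = np.linalg.inv(S)
    alpha = (1.0 + lam) * S_inv
    beta = -T
    gamma = lam * S_inv @ A_init
    return solve_sylvester(alpha, beta, gamma)
```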
In this research, it is solved using LAPACK,1 a software library for numerical linear algebra. Non-positive entries in F ∗ indicate unrealistic correspondences of sentences and are thus set to zero before applying the alignment function. 2.3 Alignment Function Once the optimal F ∗is acquired, the remaining task is to design an alignment function A to convert it into an alignment solution. An intuitive approach is to use a heuristic search for local optimization (Kit et al., 2004), which produces an alignment with respect to the largest scores in each row and each column. However, this does not guarantee a globally optimal solution. Figure 3 illustrates a mapping relation matrix onto an alignment matrix, which also shows that the optimal alignment cannot be achieved by heuristic search. Banding is another approach frequently used to convert a relation matrix to alignment (Kay and R¨oscheisen, 1993). It is founded on the observation that true monotonic alignment paths usually lie close to the diagonal of a relation matrix. However, it is not applicable to our task due to the nonmonotonicity involved. We opt for converting a relation matrix into specific alignment by solving 1http://www.netlib.org/lapack/ 624 alignment matrix relation matrix 2 1 2 4 3 5 6 7 1 3 4 5 6 0 0.4 0 0.5 0 0 0 0.3 0 0 0.6 0 0 0 0 0 0 0 0 0 0 0.4 0 0 0 0.2 0 0 0.5 0 0 0 0 0 0.6 0 0.1 0 0 0 0 0.8 2 1 2 4 3 5 6 7 1 3 4 5 6 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 Figure 3: Illustration of sentence alignment from relation matrix to alignment matrix. The scores marked with arrows are the best in each row/column to be used by the heuristic search. The right matrix represents the corresponding alignment matrix by our algorithm. the following optimization A = arg max X m X i=1 n X j=1 XijFij (7) s.t. m X i=1 Xij ≤1, n X j=1 Xij ≤1, Xij ∈{0, 1} This turns sentence alignment into a problem to be resolved by binary linear programming (BIP), which has been successfully applied to word alignment (Taskar et al., 2005). Given a scoring matrix, it guarantees an optimal solution. 2.4 Alignment Initialization Once the above alignment function is available, the initial alignment matrix ˆA can be derived from an initial relation matrix ˆF obtained by an available alignment method. This work resorts to another approach to initializing the relation matrix. In many genres of bitexts, such as government transcripts or legal documents, there are a certain number of common strings on the two sides of bitexts. In legal documents, for example, translations of many key terms are usually accompanied with their source terms. Also, common numberings can be found in enumerated lists in bitexts. These kinds of anchor strings provide quite reliable information to link bilingual sentences into pairs, and thus can serve as useful cues for sentence alignment. In fact, they can be treated as a special type of highly reliable “bilexicon”. The anchor strings used in this work are derived by searching the bitexts using word-level inverted indexing, a basic technique widely used in information retrieval (Baeza-Yates and Ribeiro-Neto, 2011). For each index term, a list of postings is created. Each posting includes a sentence identifier, the in-sentence frequency and positions of this term. The positions of terms are intersected to find common anchor strings. 
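A rough sketch of the anchor-string idea follows: shared numberings and embedded Latin-script terms are collected for each bilingual sentence pair. This simplified version intersects token sets rather than positional postings, so it recovers single shared tokens instead of the longer maximal strings described above; the tokenization pattern and the example pair are assumptions for illustration.

```python
import re

ANCHOR_TOKEN = re.compile(r"[A-Za-z]+|\d+(?:\.\d+)*")

def anchor_sets(english_sentence, chinese_sentence):
    """Anchor candidates shared by the two sides of a sentence pair:
    numbers and Latin-script terms that appear verbatim on both sides."""
    eng = {t.lower() for t in ANCHOR_TOKEN.findall(english_sentence)}
    chi = {t.lower() for t in ANCHOR_TOKEN.findall(chinese_sentence)}
    return eng & chi

print(anchor_sets(
    '"British citizen" means a person who has that status under the 1981 Act',
    '英國公民 (British citizen) 指根據1981年法令具有該身分的人'))
# {'british', 'citizen', '1981'}  (set order may vary)
```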
The anchor strings, once found, are used to calculate the initial affinity ˆFij of two sentences using Dice’s coefficient ˆFij = 2|C1i ∩C2j| |C1i| + |C2j| (8) where C1i and C2j are the anchor sets in si and tj, respectively, and | · | is the cardinality of a set. Apart from using anchor strings, other avenues for the initialization are studied in the evaluation section below, i.e., using another aligner and an existing lexicon. 2.5 Monolingual Affinity Although various kinds of information from a monolingual corpus have been exploited to boost statistical machine translation models (Liu et al., 2010; Su et al., 2012), we have not yet been exposed to any attempt to leverage monolingual sentence affinity for sentence alignment. In our framework, an attempt to this can be made through the computation of W and V . Let us take W as an example, where the entry Wij represents the affinity of sentence si and sentence sj, and it is set to 0 for i = j in order to avoid self-reinforcement during optimization (Zhou et al., 2004). When two sentences in S or T are not too short, or their content is not divergent in meaning, their semantic similarity can be estimated in terms of common words. Motivated by this, we define Wij (for i ̸= j) based on the Gaussian kernel as Wij = exp −1 2σ2  1 − vT i vj ∥vi∥∥vj∥ 2! (9) 625 where σ is the standard deviation parameter, vi and vj are vectors of si and sj with each component corresponding to the tf-idf value of a particular term in S (or T ), and ∥·∥is the norm of a vector. The underlying assumption here is that words appearing frequently in a small number of sentences but rarely in the others are more significant in measuring sentence affinity. Although semantic similarity estimation is a straightforward approach to deriving the two affinity matrices, other approaches are also feasible. An alternative approach can be based on sentence length under the assumption that two sentences with close lengths in one language tend to have their translations also with close lengths. 2.6 Discussion The proposed semisupervised framework for nonmonotonic alignment is in fact generalized beyond, and can also be applied to, monotonic alignment. Towards this, we need to make use of sentence sequence information. One way to do it is to incorporate sentence positions into Equation (1) by introducing a position constraint Qp(F) to enforce that bilingual sentences in closer positions should have a higher chance to match one another. For example, the new constraint can be defined as Qp(F) = m X i=1 n X j=1 |pi −qj|F 2 ij, where pi and qj are the absolute (or relative) positions of two bilingual sentences in their respective sequences. Another way follows the banding assumption that the actual couplings only appear in a narrow band along the main diagonal of relation matrix. Accordingly, all entries of F ∗outside this band are set to zero before the alignment function is applied. Kay and R¨oscheisen (1993) illustrate that this can be done by modeling the maximum deviation of true couplings from the diagonal as O(√n). 3 Evaluation 3.1 Data Set Our data set is acquired from the Bilingual Laws Information System (BLIS),2 an electronic database of Hong Kong legislation maintained by the Department of Justice, HKSAR. BLIS 2http://www.legislation.gov.hk provides Chinese-English bilingual texts of ordinances and subsidiary legislation in effect on or after 30 June 1997. 
It organizes the legal texts into a hierarchy of chapters, sections, subsections, paragraphs and subparagraphs, and displays the content of a such hierarchical construct (usually a section) on a single web page. By web crawling, we have collected in total 31,516 English and 31,405 Chinese web pages, forming a bilingual corpus of 31,401 bitexts after filtering out null pages. A text contains several to two hundred sentences. Many bitexts exhibit partially non-monotonic order of sentences. Among them, 175 bitexts are randomly selected for manual alignment. Sentences are identified based on punctuations. OpenNLP Tokenizer3 is applied to segment English sentences into tokens. For Chinese, since there is no reliable segmenter for this genre of text, we have to treat each Chinese character as a single token. In addition, to calculate the monolingual sentence affinity, stemming of English words is performed with the Porter Stemmer (Porter, 1980) after anchor string mining. The manual alignment of the evaluation data set is performed upon the initial alignment by Hunalign (Varga et al., 2005), an effective sentence aligner that uses both sentence length and a bilexicon (if available). For this work, Hunalign relies solely on sentence length. Its output is then double-checked and corrected by two experts in bilingual studies, resulting in a data set of 1747 1-1 and 70 1-0 or 0-1 sentence pairs. The standard deviation σ in (9) is an important parameter for the Gaussian kernel that has to be determined empirically (Zhu et al., 2003; Zhou et al., 2004). In addition, the Q function also involves another parameter λ to adjust the weight of the bilingual constraint. This work seeks an approach to deriving the optimal parameters without any external training data beyond the initial alignment. A three-fold cross-validation is thus performed on the initial 1-1 alignment and the parameters that give the best average performance are chosen. 3.2 Monolingual Consistency To demonstrate the validity of the monolingual consistency, the semantic similarity defined by vT i vj ∥vi∥∥vj∥is evaluated as follows. 500 pairs of English sentences with the highest similarities are selected, excluding null pairings (1-0 or 0-1 type). 3http://opennlp.apache.org/ 626 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 Similarity of English sentence pair Similarity of Chinese sentence pair Figure 4: Demonstration of monolingual consistency. The horizontal axis is the similarity of English sentence pairs and the vertical is the similarity of the corresponding pairs in Chinese. Type Total initAlign NonmoAlign Pred Corr Pred Corr 1-0 70 662 66 70 50 1-1 1747 1451 1354 1747 1533 Table 1: Performance of the initial alignment and our aligner, where the Pred and Corr columns are the numbers of predicted and correct pairings. All of these high-affinity pairs have a similarity score higher than 0.72. A number of duplicate sentences (e.g., date) with exceptionally high similarity 1.0 are dropped. Also, the similarity of the corresponding translations of each selected pair is calculated. These two sets of similarity scores are then plotted in a scatter plot, as in Figure 4. If the monolingual consistency assumption holds, the plotted points would appear nearby the diagonal. Figure 4 confirms this, indicating that sentence pairs with high affinity in one language do have their counterparts with similarly high affinity in the other language. 
3.3 Impact of Initial Alignment

The 1-1 initial alignment plays the role of labeled instances for the semisupervised learning and is of critical importance to the learning performance. As shown in Table 1, our alignment function predicts 1451 1-1 pairings by virtue of anchor strings, among which 1354 pairings are correct, yielding a relatively high precision in the non-monotonic setting. It also predicts null alignment for many sentences that contain no anchor, which explains why it outputs 662 1-0 pairings when there are only 70 true 1-0 pairings. Starting from this initial alignment, our aligner (let us call it NonmoAlign) discovers 179 more correct 1-1 pairings.

Type   Total   initAlign Pred   initAlign Corr   NonmoAlign Pred   NonmoAlign Corr
1-0       70              662               66                70                50
1-1     1747             1451             1354              1747              1533
Table 1: Performance of the initial alignment and our aligner, where the Pred and Corr columns are the numbers of predicted and correct pairings.

A question here is how the scale of the initial alignment affects the final alignment. To examine this, we randomly select 20%, 40%, 60% and 80% of the 1451 detected 1-1 pairings as the initial alignments for a series of experiments. The random selection for each proportion is performed ten times, and the average alignment performance is taken as the final result and plotted in Figure 5. One observation from this figure is that the aligner consistently discovers significantly more 1-1 pairings on top of an initial 1-1 alignment, which can be attributed to the monolingual consistency. Another observation is that the alignment performance improves as the percentage of the initial alignment increases, although the gain gradually levels off. Even when the percentage is very low, the aligner still works quite effectively.

Figure 5: Performance of non-monotonic alignment along the percentage of initial 1-1 alignment (x-axis: percentage of the initial 1-1 alignment; y-axis: correctly detected 1-1 pairings; curves: NonmoAlign and initAlign).

3.4 Non-Monotonic Alignment

To test our aligner on non-monotonic sequences of sentences, we randomly scramble the sentences in our experimental data. This undoubtedly increases the difficulty of sentence alignment, especially for the traditional approaches that rely critically on monotonicity. The baseline methods used for comparison are Moore's aligner (Moore, 2002) and Hunalign (Varga et al., 2005). Hunalign is configured with the option [-realign], which triggers a three-step procedure: after an initial alignment, Hunalign heuristically enriches its dictionary using word co-occurrences in identified sentence pairs; then, it re-runs the alignment process using the updated dictionary. According to Varga et al. (2005), this setting gives a higher alignment quality than otherwise. In addition, Hunalign can use an external bilexicon; for a fair comparison, the identified anchor set is fed to Hunalign as a special bilexicon.

The performance of alignment is measured by precision (P), recall (R) and F-measure (F1). Micro-averaged precision, recall and F-measure are also computed to measure the overall performance on 1-1 and 1-0 alignment. The final results are presented in Table 2, showing that both Moore's aligner and Hunalign underperform ours on non-monotonic alignment.

             Moore                  Hunalign               NonmoAlign
Type     P      R      F1       P      R      F1       P      R      F1
1-1      0.104  0.104  0.104    0.407  0.229  0.293    0.878  0.878  0.878
1-0      0.288  0.243  0.264    0.033  0.671  0.062    0.714  0.714  0.714
Micro    0.110  0.110  0.110    0.184  0.246  0.210    0.871  0.871  0.871
Table 2: Performance comparison with the baseline methods.

The particularly poor performance of Moore's aligner can be attributed to its requirement of at least several thousand sentences of bitext input for reliable estimation of its parameters; unfortunately, our available data has not reached that scale.
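To make the reported scores concrete, the following is a minimal sketch of how the per-type and micro-averaged precision, recall and F1 in Tables 1 to 3 could be computed from predicted and gold pairings; the representation of pairings as (i, j, type) tuples is an assumption made purely for illustration.

```python
# Minimal sketch: per-type and micro-averaged precision/recall/F1 over
# predicted vs. gold sentence pairings. Pairings are assumed to be tuples
# (i, j, type), e.g. (3, 5, "1-1") or (7, None, "1-0") for a null alignment.

def prf(pred, gold):
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def evaluate(predicted, reference, types=("1-1", "1-0")):
    """predicted, reference: sets of (i, j, type) pairings."""
    scores = {}
    for t in types:
        scores[t] = prf({x for x in predicted if x[2] == t},
                        {x for x in reference if x[2] == t})
    # Micro-average: pool all pairings of the listed types before scoring.
    pooled_pred = {x for x in predicted if x[2] in types}
    pooled_gold = {x for x in reference if x[2] in types}
    scores["micro"] = prf(pooled_pred, pooled_gold)
    return scores
```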
3.5 Partially Non-Monotonic Alignment

Fully non-monotonic bitexts are rare in practice, but partially non-monotonic ones are not. Unlike traditional alignment approaches, ours does not depend on the degree of monotonicity. To test this, we construct five new versions of the data set for a series of experiments by randomly choosing and scrambling 10%, 20%, 40%, 60% and 80% of the sentence pairings, with the unscrambled data serving as the 0% setting. In theory, partial non-monotonicity of varying degree should have no impact on the performance of our aligner; it is thus not surprising that it achieves the same results as reported in the last subsection. NonmoAlign initialized with Hunalign (marked as NonmoAlign_Hun) is also tested. The experimental results are presented in Figure 6. It shows that both Moore's aligner and Hunalign work relatively well on bitexts with a low degree of non-monotonicity, but their performance drops dramatically as the non-monotonicity increases. Despite the improvement at low non-monotonicity obtained by seeding our aligner with Hunalign, its performance likewise decreases when the degree of non-monotonicity increases, due to the quality decrease of the initial alignment by Hunalign.

Figure 6: Performance of alignment approaches at different degrees of non-monotonicity (x-axis: non-monotonic ratio; y-axis: micro-F1; curves: NonmoAlign, Hunalign, Moore, NonmoAlign_Hun).

3.6 Monotonic Alignment

The proposed alignment approach is also expected to work well on monotonic sentence alignment. An evaluation is conducted for this using a monotonic data set constructed from our data set by discarding all its 126 crossed pairings. Of the two strategies discussed above, banding is used to help our aligner incorporate the sequence information. The initial relation matrix is built with the aid of a dictionary automatically derived by Hunalign, and its entries are derived by employing a strategy similar to that of Varga et al. (2005). The evaluation results are presented in Table 3, which shows that NonmoAlign still achieves very competitive performance on monotonic sentence alignment.

             Moore                  Hunalign               NonmoAlign
Type     P      R      F1       P      R      F1       P      R      F1
1-1      0.827  0.828  0.827    0.999  0.972  0.986    0.987  0.987  0.987
1-0      0.359  0.329  0.343    0.330  0.457  0.383    0.729  0.729  0.729
Micro    0.809  0.807  0.808    0.961  0.951  0.956    0.976  0.976  0.976
Table 3: Performance of monotonic alignment in comparison with the baseline methods.

4 Related Work

The research on sentence alignment originates in the early 1990s. Gale and Church (1991) and Brown (1991) report early works using length statistics of bilingual sentences. The general idea is that the closer two sentences are in length, the more likely they are to align. A notable difference between their methods is that the former measures sentence length in number of characters while the latter uses number of tokens. Both use dynamic programming to search for the best alignment. As shown in Chen (1993) and Wu (1994), however, sentence-length based methods suffer when the texts to be aligned contain small passages, or the languages involved share few cognates. The subsequent stage of sentence alignment research is accompanied by the advent of a handful of well-designed alignment tools.
Moore (2002) proposes a three-pass procedure to find final alignment. Its bitext input is initially aligned based on sentence length. This step generates a set of strictly-selected sentence pairs for use to train an IBM translation model 1 (Brown et al., 1993). Its final step realigns the bitext using both sentence length and the discovered word correspondences. Hunalign (Varga et al., 2005), originally proposed as an ingredient for building parallel corpora, has demonstrated an outstanding performance on sentence alignment. Like many other aligners, it employs a similar strategy of combining sentence length and lexical data. In the absence of a lexicon, it first performs an initial alignment wholly relying on sentence length and then automatically builds a lexicon based on this alignment. Using an available lexicon, it produces a rough translation of the source text by converting each token to the one of its possible counterparts that has the highest frequency in the target corpus. Then, the relation matrix of a bitext is built of similarity scores for the rough translation and the actual translation at sentence level. The similarity of two sentences is calculated in terms of their common pairs and length ratio. To deal with noisy input, Ma (2006) proposes a lexicon-based sentence aligner - Champollion. Its distinctive feature is that it assigns different weights to words in terms of their tf-idf scores, assuming that words with low sentence frequencies in a text but high occurrences in some local sentences are more indicative of alignment. Under this assumption, the similarity of any two sentences is calculated accordingly and then a dynamic programming algorithm is applied to produce final alignment. Following this work, Li et al. (2010) propose a revised version of Champollion, attempting to improve its speed without performance loss. For this purpose, the input bitexts are first divided into smaller aligned fragments before applying Champollion to derive finer-grained sentence pairs. In another related work by Deng et al. (2007), a generative model is proposed, accompanied by two specific alignment strategies, i.e., dynamic programming and divisive clustering. Although a non-monotonic search process that tolerates two successive chunks in reverse order is involved, their work is essentially targeted at monotonic alignment. 5 Conclusion In this paper we have proposed and tested a semisupervised learning approach to nonmonotonic sentence alignment by incorporating both monolingual and bilingual consistency. The utility of monolingual consistency in maintaining the consonance of high-affinity monolingual sentences with their translations has been demonstrated. This work also exhibits that bilingual consistency of initial alignment of certain quality is useful to boost alignment performance. Our evaluation using real-world data from a legislation corpus shows that the proposed approach outperforms the baseline methods significantly when the bitext input is composed of non-monotonic sentences. Working on partially non-monotonic data, this approach also demonstrates a superior performance. Although initially proposed for nonmonotonic alignment, it works well on monotonic alignment by incorporating the constraint of sentence sequence. Acknowledgments The research described in this paper was substantially supported by the Research Grants Council (RGC) of Hong Kong SAR, China, through the GRF grant 9041597 (CityU 144410). 629 References Ricardo Baeza-Yates and Berthier Ribeiro-Neto. 2011. 
Modern Information Retrieval: The Concepts and Technology Behind Search, 2nd ed., Harlow: Addison-Wesley. Jewel B. Barlow, Moghen M. Monahemi, and Dianne P. O’Leary. 1992. Constrained matrix Sylvester equations. In SIAM Journal on Matrix Analysis and Applications, 13(1):1-9. Peter F. Brown, Jennifer C. Lai, Robert L. Mercer. 1991. Aligning sentences in parallel corpora. In Proceedings of ACL’91, pages 169-176. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263311. Stanley F. Chen. 1993. Aligning sentences in bilingual corpora using lexical information. In Proceedings of ACL’93, pages 9-16. Yonggang Deng, Shankar Kumar, and William Byrne. 2007. Segmentation and alignment of parallel text for statistical machine translation. Natural Language Engineering, 13(3): 235-260. William A. Gale, Kenneth Ward Church. 1991. A Program for aligning sentences in bilingual corpora. In Proceedings of ACL’91, pages 177-184. Martin Kay and Martin R¨oscheisen. 1993. Texttranslation alignment. Computational Linguistics, 19(1):121-142. Chunyu Kit, Jonathan J. Webster, King Kui Sin, Haihua Pan, and Heng Li. 2004. Clause alignment for bilingual HK legal texts: A lexical-based approach. International Journal of Corpus Linguistics, 9(1):2951. Chunyu Kit, Xiaoyue Liu, King Kui Sin, and Jonathan J. Webster. 2005. Harvesting the bitexts of the laws of Hong Kong from the Web. In The 5th Workshop on Asian Language Resources, pages 71-78. Judith L. Klavans and Evelyne Tzoukermann. 1990. The bicord system: Combining lexical information from bilingual corpora and machine readable dictionaries. In Proceedings of COLING’90, pages 174179. Philippe Langlais, Michel Simard, and Jean V´eronis. 1998. Methods and practical issues in evaluating alignment techniques. In Proceedings of COLINGACL’98, pages 711-717. Zhanyi Liu, Haifeng Wang, Hua Wu, and Sheng Li. 2010. Improving statistical machine translation with monolingual collocation. In Proceedings of ACL 2010, pages 825-833. Xiaoyi Ma. 2006. Champollion: A robust parallel text sentence aligner. In LREC 2006, pages 489-492. Peng Li, Maosong Sun, Ping Xue. 2010. FastChampollion: a fast and robust sentence alignment algorithm. In Proceedings of ACL 2010: Posters, pages 710-718. Robert C. Moore. 2002. Fast and accurate sentence alignment of bilingual corpora. In Proceedings of AMTA 2002, page 135-144. Jian-Yun Nie, Michel Simard, Pierre Isabelle and Richard Durand. 1999. Cross-language information retrieval based on parallel texts and automatic mining of parallel texts from the Web. In Proceedings of SIGIR’99, pages 74-81. Martin F. Porter. 1980. An algorithm for suffix stripping. Program, 14(3): 130-137. Jinsong Su, Hua Wu, Haifeng Wang, Yidong Chen, Xiaodong Shi, Huailin Dong, Qun Liu. 2012. Translation model adaptation for statistical machine translation with monolingual topic information. In Proceedings of ACL 2012, Vol. 1, pages 459-468. Ben Taskar, Simon Lacoste-Julien and Dan Klein. 2005. A discriminative matching approach to word alignment. In Proceedings of HLT/EMNLP 2005, pages 73-80. D´aniel Varga, P´eter Hal´acsy, Andr´as Kornai, Viktor Nagy, L´aszl´o N´emeth, Viktor Tr´on. 2005. Parallel corpora for medium density languages. In Proceedings of RANLP 2005, pages 590-596. Dekai Wu. 1994. Aligning a parallel English-Chinese corpus statistically with lexical criteria. In Proceedings of ACL’94, pages 80-87. Dekai Wu. 2010. 
Alignment. Handbook of Natural Language Processing, 2nd ed., CRC Press. Dengyong Zhou, Olivier Bousquet, Thomas N. Lal, Jason Weston, and Bernhard Schölkopf. 2004. Learning with local and global consistency. Advances in Neural Information Processing Systems, 16:321-328. Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. 2003. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of ICML 2003, pages 912-919.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 631–640, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Bootstrapping Entity Translation on Weakly Comparable Corpora Taesung Lee and Seung-won Hwang Department of Computer Science and Engineering Pohang University of Science and Technology (POSTECH) Pohang, Republic of Korea {elca4u, swhwang}@postech.edu Abstract This paper studies the problem of mining named entity translations from comparable corpora with some “asymmetry”. Unlike the previous approaches relying on the “symmetry” found in parallel corpora, the proposed method is tolerant to asymmetry often found in comparable corpora, by distinguishing different semantics of relations of entity pairs to selectively propagate seed entity translations on weakly comparable corpora. Our experimental results on English-Chinese corpora show that our selective propagation approach outperforms the previous approaches in named entity translation in terms of the mean reciprocal rank by up to 0.16 for organization names, and 0.14 in a low comparability case. 1 Introduction Identifying and understanding entities is a crucial step in understanding text. This task is more challenging in the presence of multilingual text, because translating named entities (NEs), such as persons, locations, or organizations, is a non-trivial task. Early research on NE translation used phonetic similarities, for example, to mine the translation ‘Mandelson’→‘曼德尔 森’[ManDeErSen] with similar sounds. However, not all NE translations are based on transliterations, as shown in Table 1—Some translations, especially the names of most organizations, are based on semantic equivalences. Furthermore, names can be abbreviated in one or both languages, e.g., the ‘World Trade Organization’ (世 界贸易组织) can be called the ‘WTO’ (世贸组 织). Another challenging example is that, a translation can be arbitrary, e.g., ‘Jackie Chan’ →‘成 龙’ [ChengLong]. There are many approaches English Chinese World Trade Organization 世界贸易组织 [ShiJieMaoYiZuZhi] WTO 世贸组织[ShiMaoZuZhi] Jackie Chan 成龙[ChengLong] Table 1: Examples of non-phonetic translations. that deal with some of these challenges (Lam et al., 2007; Yang et al., 2009), e.g., by combining phonetic similarity and a dictionary. However, arbitrary translations still cannot be handled by examining the NE pair itself. Corpus-based approaches (Kupiec, 1993; Feng, 2004), by mining external signals from a large corpus, such as parenthetical translation “成龙(Jackie Chan)”, complement the problem of transliteration-based approaches, but the coverage of this approach is limited to popular entities with such evidence. The most effective known approach to NE translation has been a holistic framework (You et al., 2010; Kim et al., 2011; You et al., 2012) combining transliteration- and corpus-based methods. In these approaches, both 1) arbitrary translations and 2) lesser-known entities can be handled, by propagating the translation scores of known entities to lesser-known entities if they co-occur frequently in both corpora. For example, a lesserknown entity Tom Watson can be translated if Mandelson and Tom Watson co-occur frequently in an English corpus, and their Chinese translations also co-occur frequently in a Chinese corpus, i.e., if the co-occurrences in the two corpora are “symmetric”. A research question we ask in this paper is: What if comparable corpora are not comparable enough to support this symmetry assumption? 
We found that this is indeed the case. For example, even English and Chinese news from the same publisher may have different focus– the Chinese version focuses more on Chinese Olympic 631 teams and Chinese local news. In the presence of such asymmetry, all previous approaches, building upon symmetry, quickly deteriorate by propagating false positives. For example, co-occurrence of Mandelson and Tom Watson may not appear in a Chinese corpus, which may lead to the translation of Tom Watson into another Chinese entity Gordon Brown which happens to co-occur with the Chinese translation of Mandelson. Our key contribution is to avoid such false propagation, by discerning the semantics of relations. For example, relations between Mandelson and Tom Watson, should be semantically different from Chinese relations between ‘戈登·布朗’ (Gordon Brown) and ‘曼德尔森’ (Mandelson). A naive approach would be finding documents with a similar topic such as politics, and scientific discovery, and allowing propagation only when the topic agrees. However, we found that a topic is a unit that is too coarse for this task because most articles on Mandelson will invariably fall into the same topic1. In clear contrast, we selectively propagate seed translations, only when the relations in the two corpora share the same semantics. This selective propagation can be especially effective for translating challenging types of entities such as organizations including the WTO used with and without abbreviation in both languages. Applying a holistic approach (You et al., 2012) on organizations leads to poor results, 0.06 in terms of the F1-score. A naive approach to increase the precision would be to consider multitype co-occurrences, hoping that highly precise translations of some type, e.g., persons with an F1-score of 0.69 (You et al., 2012), can be propagated to boost the precision on organizations. In our experiments, this naive multi-type propagation still leads to an unsatisfactory F1-score of 0.12. Such a low score can be explained by the following example. When translating ‘WTO’ using the co-occurrence with ‘Mandelson’, other co-occurrences such as (London, Mandelson) and (EU, Mandelson) produce a lot of noise because the right translation of WTO does not share much phonetic/semantic similarity. Our understanding of relation semantics, can distinguish “Mandelson was born in London” from “Mandelson visited the WTO”, to stop false propagations, which generates an F1-score 0.25 higher than the existing ap1The MRR for organization names achieved by a topic model-based approach was 0.15 lower than our best. proaches. More formally, we enable selective propagation of seed translations on weakly comparable corpora, by 1) clarifying the detailed meaning of relational information of co-occurring entities, and 2) identifying the contexts of the relational information using statement-level context comparison. In other words, we propagate the translation score of a known translation pair to a neighbor pair if the semantics of their relations in English and Chinese corpora are equivalent to accurately propagate the scores. For example, if we know ‘Russia’→‘俄罗 斯’(1) and join→加入(2), then from a pair of statements “Russia(1) joins(2) the WTO(3)” and “俄罗斯(1) 加入(2) 世贸组织(3)”, we can propagate the translation score of (Russia, 俄罗斯)(1) to (WTO, 世 贸组织)(3). However, we do not exploit a pair of statements “Russia joined the WTO” and “俄罗斯 谴责(2’) 摩洛哥” because 谴责(2’) does not mean join(2). 
Furthermore, we mine a similar EnglishChinese document pair that can be found by comparing the entity relationships, such as “Mandelson visited Moscow” and “Mandelson met Alexei Kudrin”, within the English document and the Chinese document to leverage similar contexts to assure that we use symmetric parts. For this goal, we first extract relations among entities in documents, such as visit and join, and mine semantically equivalent relations across the languages, e.g., English and Chinese, such as join→加入. Once these relation translations are mined, similar document pairs can be identified by comparing each constituent relationship among entities using their relations. Knowing document similarity improves NE translation, and improved NE translation can boost the accuracy of document and relationship similarity. This iterative process can continue until convergence. To the best of our knowledge, our approach is the first to translate a broad range of multilingual relations and exploit them to enhance NE translation. In particular, our approach leverages semantically similar document pairs to exclude incomparable parts that appear in one language only. Our method outperforms the previous approaches in translating NE up to 0.16 in terms of the mean reciprocal rank (MRR) for organization names. Moreover, our method shows robustness, with 0.14 higher MRR than seed translations, on less comparable corpora. 632 2 Related Work This work is related to two research streams: NE translation and semantically equivalent relation mining. Entity translation Existing approaches on NE translation can be categorized into 1) transliteration-based, 2) corpusbased, and 3) hybrid approaches. Transliteration-based approaches (Wan and Verspoor, 1998; Knight and Graehl, 1998) are the foundations of many decent methods, but they alone suffer from ambiguity (e.g., 史蒂夫and 始第夫have the same sounds) and cannot handle non-transliterated cases such as ‘Jackie Chan (成龙[ChengLong])’. Some methods (Lam et al., 2007; Yang et al., 2009) rely on meanings of constituent letters or words to handle organization name translation such as ‘Bank of China (中国 银行)’, whose translation is derived from ‘China (中国)’, and ‘a bank (银行)’. However, many names often originate from abbreviation (such as ‘WTO’); hence we cannot always leverage meanings. Corpus-based approaches (Kupiec, 1993; Lin et al., 2008; Jiang et al., 2009) exploit high-quality bilingual evidence such as parenthetical translation, e.g., “成龙(Jackie Chan)”, (Lin et al., 2008), semi-structural patterns (Jiang et al., 2009), and parallel corpus (Kupiec, 1993). However, the coverage of the corpus-based approaches is limited to popular entities with such bilingual evidences. On the other hand, our method can cover entities with monolingual occurrences in corpora, which significantly improves the coverage. The most effective known approach is a holistic framework that combines those two approaches (You et al., 2012; You et al., 2010; Kim et al., 2011). You et al. (2010; 2012) leverage two graphs of entities in each language, that are generated from a pair of corpora, with edge weights quantified as the strength of the relatedness of entities. Then, two graphs are iteratively aligned using the common neighbors of two entities. Kim et al. (2011) build such graphs using the context similarity, measured with a bag of words approach, of entities in news corpora to translate NEs. However, these approaches assume the symmetry of the two graphs. 
This assumption holds if two corpora are parallel, but such resources are scarce. But our approach exploits comparable parts from corpora. 0 0.1 0.2 0.3 0 10 20 30 40 50 Normalized Occurrence Weeks WTO 荃䍨㓴㓷 Figure 1: Dissimilarity of temporal distributions of ‘WTO’ in English and Chinese corpora. Other interesting approaches such as (Klementiev and Roth, 2006; Kim et al., 2012) rely on temporal distributions of entities. That is, two entities are considered to be similar if the two entities in different languages have similar occurrence distributions over time. However, the effectiveness of this feature also depends on the comparability of entity occurrences in time-stamped corpora, which may not hold as shown in Figure 1. In clear contrast, our method can find and compare articles, on different dates, describing the same NE. Moreover, our method does not require time stamps. Semantically similar relation mining Recently, similar relation mining in one language has been studied actively as a key part of automatic knowledge base construction. In automatically constructed knowledge bases, finding semantically similar relations can improve understanding of the Web describing content with many different expressions. As such an effort, PATTY (Nakashole et al., 2012) finds similar relations with almost the same support sets–the sets of NE pairs that co-occur with the relations. However, because of the regional locality of information, bilingual corpora contain many NE pairs that appear in only one of the support sets of the semantically identical relations. NELL (Mohamed et al., 2011) finds related relations using seed pairs of one given relation; then, using K-means clustering, it finds relations that are semantically similar to the given relation. Unfortunately, this method requires that we set K manually, and extract relations for each given relation. Therefore, this is unsuitable to support general relations. There are only few works on translating relations or obtaining multi-lingual similar relations. Schone et al. (2011) try to find relation patterns 633 in multiple languages for given seed pairs of a relation. Because this approach finds seed pairs in Wikipedia infoboxes, the number of retrievable relations is restricted to five. Kim et al. (2010) seek more diverse types of relations, but it requires parallel corpora, which are scarce. 3 Framework Overview In this section, we provide an overview of our framework for translating NEs, using news corpora in English and Chinese as a running example. Because such corpora contain asymmetric parts, the goal of our framework is to overcome asymmetry by distinguishing the semantics of relations, and leveraging document context defined by the relations of entities. (e) Iteration on ܶே ܶே ௧ାଵ՚ ܶே ௧ (Section 4.5) (c) Relation Translation ܶோ (Section 4.3) (d) Statement-Level Document Context Comparison ܶ஽ (Section 4.4) (b) Seed Entity Translation ܶே ଵ (Section 4.2) Iterative process English Corpus Chinese Corpus (a) Statement Extraction (Section 4.1) Figure 2: Framework overview. For this purpose, we build a mutual bootstrapping framework (Figure 2), between entity translation and relation translation using extracted relationships of entities (Figure 2 (a), Section 4.1). More formally, we use the following process: 1. Base condition (Figure 2 (a), Section 4.2): Initializing T (1) N (eE, eC), a seed entity translation score, where eE is an English entity, and eC is a Chinese entity. 
T (1) N can be initialized by phonetic similarity or other NE translation methods. 2. Iteration: Obtaining T t+1 N using T t N. 1) Using T t N, we obtain a set of relation translations with a semantic similarity score, T t R(rE, rC), for an English relation rE and a Chinese relation rC (Figure 2 (b), Section 4.3) (e.g., rE =visit and rC =访问). 2) Using T t N and T t R, we identify a set of semantically similar document pairs that describe the same event with a similarity score T t D(dE, dC) where dE is an English document and dC is a Chinese document (Figure 2 (c), Section 4.4). 3) Using T t N, T t R and T t D, we compute T t+1 N , an improved entity translation score (Figure 2 (d), Section 4.5). Each sub-goal reinforces the result of others in the (t + 1)-th iteration, and by iteratively running them, we can improve the quality of translations. Note that, hereinafter, we omit (t) for readability when there is no ambiguity. 4 Methods In this section, we describe our method in detail. First, we explain how we extract statements, which are units of relational information, from documents in Section 4.1, and how we obtain seed name translations in Section 4.2. Next, we present our method for discovering relation translations across languages in Section 4.3. In Section 4.4, we use the name translations and the relation translations to compare document contexts which can boost the precision of NE translation. In Section 4.5, we describe how we use the resources obtained so far to improve NE translation. 4.1 Statement Extraction We extract relational statements, which we exploit to propagate translation scores, from an English news corpus and a Chinese news corpus. A relational statement, or simply a statement is a triple (x, r, y), representing a relationship between two names, x and y. For example, from “Mandelson recently visited Moscow,” we obtain this statement: (Mandelson, visit, Moscow). We follow a standard procedure to extract statements, as similarly adopted by Nakashole et al. (2012), using Stanford CoreNLP (Klein and Manning, 2003) to lemmatize and parse sentences. Here, we refer readers to existing work for further details because this is not our key contribution. 4.2 Seed Entity Translation We need a few seed translation pairs to initiate the framework. We build a seed translation score T (1) N (eE, eC) indicating the similarity of an English entity eE and a Chinese entity eC using an existing method. For example, most methods would give high value for 634 T (1) N (Mandelson, 曼德尔森[ManDeErSen]). In this work, we adopted (You et al., 2012) with (Lam et al., 2007) as a base translation matrix to build the seed translation function. We also use a dictionary to obtain non-NE translations such as ‘government’. We use an English-Chinese general word dictionary containing approximately 80,000 English-Chinese translation word pairs that was also used by Kim et al. (2011) to measure the similarity of context words of entities. 4.3 Relation Translation We need to identify relations that have the equivalent semantics across languages, (e.g., visit→访 问), to enable selective propagation of translation scores. Formally, our goal is to measure a pairwise relation translation score TR(rE, rC) for an English relation rE ∈RE and a Chinese relation rC ∈RC where RE is a set of all English relations and RC is a set of all Chinese relations. We first explain a basic feature to measure the similarity of two relations, its limitations, and how we address the problems. 
A basic clue is that relations of the same meaning are likely to be mentioned with the same entity pairs. For example, if we have (Mandelson, visit, Moscow) as well as (Mandelson, head to, Moscow) in the corpus, this is a positive signal that the two relations may share the same meaning. Such NE pairs are called support pairs of the two relations. We formally define this clue for relations in the same language, and then describe that in the bilingual setting. A support intersection Hm(ri, rj), a set of support pairs, for monolingual relations ri and rj is defined as Hm(ri, rj) = H(ri) ∩H(rj) (1) where H(r) is the support set of a relation r defined as H(r) = {(x, y)|(x, r, y) ∈S}, and S is either SE, a set of all English statements, or SC, a set of all Chinese statements that we extracted in Section 4.1. Likewise, we can define a support intersection for relations in the different languages using the translation score TN(eE, eC). For an English relation rE and a Chinese relation rC, Hb(rE, rC) ={(xE, xC, yE, yC)| TN(xE, xC) ≥θ and TN(yE, yC) ≥θ for (xE, rE, yE) ∈SE and (xC, rC, yC) ∈SC} (2) where θ = 0.6 is a harsh threshold to exclude most of the false translations by TN. Finally, we define a support intersection, a set of support pairs between two relations ri and rj of any languages, H(ri, rj) =      Hb(ri, rj) if ri ∈RE and rj ∈RC Hb(rj, ri) if rj ∈RE and ri ∈RC Hm(ri, rj) otherwise (3) Intuitively, |H(ri, rj)| indicates the strength of the semantic similarity of two relations ri and rj of any languages. However, as shown in Table 2, we cannot use this value directly to measure the similarity because the support intersection of semantically similar bilingual relations (e.g., |H(head to, 访问)| = 2) is generally very low, and normalization cannot remedy this problem as we can see from |H(visit, 访问)| = 27 and |H(visit)| = 1617. Set Cardinality H(visit) 1617 H(访问) 2788 H(visit, 访问) 27 H(head to, 访问) 2 Table 2: Evidence cardinality in the corpora. 䇯䰞 visit head to call on denounce criticize blame ask request appeal to fly to 8 2 1 1 2 4 Figure 3: Network of relations. Edges indicate that the relations have a non-empty support intersection, and edge labels show the size of the intersection. We found that the connectivity among similar relations is more important than the strength of the similarity. For example, as shown in Figure 3, visit is connected to most of the visit-relations such as head to, 访问. Although visit is connected to criticize, visit is not connected to other criticize-relations such as denounce and blame, whereas criticize, denounce, and blame are inter635 䇯䰞 visit head to visit-cluster denounce criticize blame criticize-cluster call on 10 6 fly to 2 call on ask request appeal to request-cluster Figure 4: Relation clusters and a few individual relations. Edge labels show the size of the intersection. connected. To exploit this feature, we use a random walk-based graph clustering method. Formally, we use Markov clustering (Van Dongen, 2000) on a graph G = (V, E) of relations, where V = RE ∪RC is a set of all English and Chinese relations. An edge (ri, rj) indicates that two relations in any languages are similar, and its weight is quantified by a sigmoid function on a linear transformation of |H(ri, rj)| that was empirically found to produce good results. Each resultant cluster forms a set of bilingual similar relations, c = {rc1, ..., rcM }, such as visitcluster, which consists of visit, head to, and 访问 in Figure 4. 
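Before describing how sparsely supported relations are handled, here is a minimal sketch of the support-set statistics and of the weighted relation graph construction described above. It is an illustration rather than our implementation: the sigmoid parameters a and b are placeholders (only a sigmoid of a linear transformation of |H(r_i, r_j)| is specified), and the resulting edge map would be handed to an off-the-shelf Markov clustering implementation.

```python
# Minimal sketch of Section 4.3: support sets H(r), monolingual and
# bilingual support intersections (Equations 1 and 2), and the weighted
# relation graph whose edges feed Markov clustering.
import math
from collections import defaultdict

def support_sets(statements):
    """statements: iterable of (x, r, y); returns {relation: {(x, y), ...}}."""
    H = defaultdict(set)
    for x, r, y in statements:
        H[r].add((x, y))
    return H

def mono_intersection(H, ri, rj):
    return H[ri] & H[rj]                       # Equation (1)

def bi_intersection(H_en, H_zh, r_en, r_zh, T_N, theta=0.6):
    """Equation (2): argument pairs whose translation scores reach theta."""
    out = set()
    for xe, ye in H_en[r_en]:
        for xc, yc in H_zh[r_zh]:
            if T_N.get((xe, xc), 0.0) >= theta and T_N.get((ye, yc), 0.0) >= theta:
                out.add((xe, xc, ye, yc))
    return out

def relation_graph(H_en, H_zh, T_N, a=1.0, b=-2.0):
    """Edge weight = sigmoid(a * |H(ri, rj)| + b) for non-empty intersections."""
    edges = {}
    def add(ri, rj, size):
        if size:
            edges[(ri, rj)] = 1.0 / (1.0 + math.exp(-(a * size + b)))
    rels_en, rels_zh = list(H_en), list(H_zh)
    for i, ri in enumerate(rels_en):
        for rj in rels_en[i + 1:]:
            add(ri, rj, len(mono_intersection(H_en, ri, rj)))
        for rj in rels_zh:
            add(ri, rj, len(bi_intersection(H_en, H_zh, ri, rj, T_N)))
    for i, ri in enumerate(rels_zh):
        for rj in rels_zh[i + 1:]:
            add(ri, rj, len(mono_intersection(H_zh, ri, rj)))
    return edges   # pass to a Markov clustering implementation of choice
```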
However, this cluster may not contain all similar relations. A relation may have multiple meanings (e.g., call on) so it can be clustered to another cluster, or a relation might not be clustered when its support set is too small (e.g., fly to). For such relations, rather than assigning zero similarity to visit-relations, we compute a cluster membership function based on support pairs of the cluster members and the target relation, and then formulate a pairwise relation translation score. Formally, we learn the membership function of a relation r to a cluster c using support vector regression (Joachims, 1999) with the following features based on the support set of cluster c, H(c) = ∪ r∈c H(r), and the support intersection of r and c, H(r, c) = ∪ r∗∈c H(r, r∗). • f1(r, c) = |H(r, c)|/|H(r)|: This quantifies the degree of inclusion, H(c) ∈H(r). • f2(r, c) = |H(r, c)|/|H(c)|: This quantifies the degree of inclusion, H(r) ∈H(c). • f3(r, c) = |Hwithin(r, c)|/|Hwithin(c)|: This is a variation of f2 that considers only noun phrase pairs shared at least once by relations in c. • f4(r, c) = |Hwithin(r, c)|/|Hshared(c)|: This is a variation of f2 that considers only noun phrase pairs shared at least once by any pair of relations. • f5(r, c) = |{r∗∈c|H(r, r∗) > 0}|/|c|: This is the degree of connectivity to the cluster members. where Hwithin(r, c) = ∪ r∗∈c H(r, c) ∩H(r, r∗), the intersection, considering translation, of H(r) and noun phrase pairs shared at once by relations in c, Hwithin(c) = ∪ r∗∈c H(r∗, c −{r∗}), and Hshared(c) = ∪ r∗∈RE∪RC H(r∗, c), the noun phrase pairs shared at once by any relations. The use of Hwithin and Hshared is based on the observation that a noun phrase pair that appear in only one relation tends to be an incorrectly chunked entity such as ‘World Trade’ from the ‘World Trade Organization’. Based on this membership function S(r, c), we compute pairwise relation similarity. We consider that two relations are similar if they have at least one cluster that the both relations belong to, which can be measured with S(r, c). More formally, pairwise similarity of relations ri and rj is defined as TR(ri, rj) = max c∈C S(ri, c) · S(rj, c) (4) where C is a set of all clusters. 4.4 Statement-level Document Context Comparison A brute-force statement matching approach often fails due to ambiguity created by ignoring context, and missing information in TN or TR. Therefore, we detect similar document pairs to boost the statement matching process. Unlike the previous approaches (e.g., bag-of-words), we focus on the relationships of entities within documents using the extracted statements. Formally, we compute the similarity of two statements sE = (xE, rE, yE) and sC = (xC, rC, yC) in different languages as follows: TS(sE, sC) = TN(xE, xC)TR(rE, rC)TN(yE, yC) (5) With this definition, we can find similar statements described with different vocabularies in different languages. To compare a document pair, we use the following equation to measure the similarity of an 636 English document di E and a Chinese document dj C based on their statements Si E and Sj C, respectively: TD(di E, dj C) = ∑ (sE,sC)∈B TS(si,r E , sj,r C ) |Si E| + |Si E| −|B| (6) where B ⊂Si E×Sj C is a greedy approximate solution of maximum bipartite matching (West, 1999) on a bipartite graph GB = (VB = (Si E, Sj C), EB) with edge weights that are defined by TS. 
The maximum bipartite matching finds a subset of edges in S_E^i × S_C^j that maximizes the sum of the selected edge weights and in which no two edges share a node as their anchor point.

4.5 Iteration on T_N

In this section, we describe how we use the statement similarity function T_S and the document similarity function T_D to derive the next-generation entity translation function T_N^(t+1). We consider that a pair of an English entity e_E and a Chinese entity e_C is likely to indicate the same real-world entity if the two entities have 1) semantically similar relations to the same entity, 2) under the same context. Formally, we define an increment function as follows:

$\Delta T_N(e_E, e_C) = \sum_{d_E^i} \sum_{d_C^j} T_D(d_E^i, d_C^j) \max_{(s_E, s_C) \in B^*} T_S(s_E, s_C)$   (7)

where B* is the subset of B ⊂ S_E^i × S_C^j such that the connected statements mention e_E and e_C, and B is the greedy approximate solution of maximum bipartite matching for the set S_E^i of statements of d_E^i and the set S_C^j of statements of d_C^j. In other words, B* is the set of matching statement pairs mentioning the translation targets e_E and e_C in the document pair. Then, we use the following equation to improve the original entity translation function:

$T_N^{(t+1)}(e_E, e_C) = (1 - \lambda)\,\frac{\Delta T_N(e_E, e_C)}{\sum_{e_C^*} \Delta T_N(e_E, e_C^*)} + \lambda\, T_N(e_E, e_C)$   (8)

where λ is a mixing parameter in [0, 1]; we set λ = 0.6 in our experiments. With this update, we obtain improved NE translations that take into account the relations an entity has to other entities under the same context, thereby achieving higher precision.
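To make Equations (5) through (7) concrete, the following is a minimal sketch, not our actual implementation: it assumes that T_N and T_R are available as score dictionaries over (English, Chinese) pairs, that each document is represented by its list of (x, r, y) statements, and that the denominator of Equation (6) is |S_E^i| + |S_C^j| - |B|; all function names are illustrative.

```python
# Minimal sketch of statement similarity T_S, greedy bipartite statement
# matching for document similarity T_D, and the evidence increment used
# to update entity translation scores.

def t_s(s_en, s_zh, T_N, T_R):
    """Equation (5): product of the two argument scores and the relation score."""
    (xe, re, ye), (xc, rc, yc) = s_en, s_zh
    return (T_N.get((xe, xc), 0.0) *
            T_R.get((re, rc), 0.0) *
            T_N.get((ye, yc), 0.0))

def greedy_match(stmts_en, stmts_zh, T_N, T_R):
    """Greedy approximation of maximum bipartite matching over statements."""
    edges = sorted(((t_s(se, sc, T_N, T_R), i, j)
                    for i, se in enumerate(stmts_en)
                    for j, sc in enumerate(stmts_zh)), reverse=True)
    used_en, used_zh, matching = set(), set(), []
    for w, i, j in edges:
        if w > 0 and i not in used_en and j not in used_zh:
            used_en.add(i); used_zh.add(j)
            matching.append((stmts_en[i], stmts_zh[j], w))
    return matching

def t_d(stmts_en, stmts_zh, matching):
    """Equation (6): matched similarity mass normalized by unmatched statements."""
    denom = len(stmts_en) + len(stmts_zh) - len(matching)
    return sum(w for _, _, w in matching) / denom if denom else 0.0

def delta_t_n(e_en, e_zh, doc_pairs, T_N, T_R):
    """Equation (7): accumulate evidence for (e_en, e_zh) over document pairs."""
    total = 0.0
    for stmts_en, stmts_zh in doc_pairs:
        matching = greedy_match(stmts_en, stmts_zh, T_N, T_R)
        relevant = [w for se, sc, w in matching
                    if e_en in (se[0], se[2]) and e_zh in (sc[0], sc[2])]
        if relevant:
            total += t_d(stmts_en, stmts_zh, matching) * max(relevant)
    return total
```

In practice the document pairs would be restricted to those with a non-negligible T_D, and the accumulated increment would then be normalized and interpolated with the previous scores as in Equation (8).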
5 Experiments

In this section, we present experimental settings and results of translating entity names using our methods compared with several baselines.

5.1 Data and Evaluation

We processed news articles for the entire year of 2008 from Xinhua news, which publishes news in both English and Chinese; these corpora were also used by Kim et al. (2011) and Shao and Ng (2004). The English corpus consists of 100,746 news articles, and the Chinese corpus consists of 88,031 news articles. The news corpora are not parallel but comparable corpora, with asymmetry of entities and relationships, as the asymmetry in the number of documents also suggests. Examples of such locality in Xinhua news include the more extensive coverage of Chinese Olympic teams and domestic sports in the Chinese news. Our framework finds and leverages comparable parts from the corpora without document-content-external information such as time stamps. We also show that, under decreasing comparability, our method retains a higher MRR than the baselines.

We follow the evaluation procedures used by You et al. (2012) and Kim et al. (2011) to fairly and precisely compare the effectiveness of our methods with the baselines. To measure performance, we use the mean reciprocal rank (MRR) to evaluate a translation function T:

$\mathrm{MRR}(T) = \frac{1}{|Q|} \sum_{(u,v) \in Q} \frac{1}{\mathrm{rank}_T(u, v)}$   (9)

where Q is the set of gold English-Chinese translation pairs (u, v) and rank_T(u, v) is the rank of T(u, v) in {T(u, w) | w is a Chinese entity}. In addition, we use precision, recall, and F1-score.

As gold translation pairs, we use the evaluation data used by You et al. (2012) with additional labels, especially for organizations. The labeling task is done by randomly selecting English entities and finding their Chinese translations in the Chinese corpus; we only use entities whose translations appear in the Chinese corpus. We present the evaluation results for persons and organizations to show the robustness of the methods. In total, we identified 490 English entities in the English news with Chinese translations in the Chinese news. Among the 490 entities, 221 NEs are persons and 52 NEs are organizations.

             Person                      Organization
             MRR    P.     R.     F1     MRR    P.     R.     F1
T_N^(2)      0.80   0.81   0.79   0.80   0.53   0.56   0.52   0.54
T_N^(1)      0.77   0.80   0.77   0.78   0.44   0.49   0.44   0.46
T_PH+P^S     0.73   0.70   0.67   0.69   0.14   0.17   0.04   0.06
T_PH+P^M     0.68   0.70   0.68   0.69   0.08   0.31   0.08   0.12
T_HB         0.71   0.59   0.59   0.59   0.37   0.29   0.29   0.29
T_Dict       0.09   1.00   0.09   0.17   0.17   1.00   0.17   0.30
Table 3: Evaluation results of the methods.

5.2 Baselines

We compare our methods with the following baselines.

• T_PH+P^S (You et al., 2012) is a holistic method that uses a transliteration method as base translations, and then reinforces them to achieve higher quality. This method uses only a single type of entities to propagate the translation scores.
• T_PH+P^M is the holistic method revised to use naive multi-type propagation, which uses multiple types of entities to reinforce the translation scores.
• T_HB is a linear combination of transliteration and semantic translation methods (Lam et al., 2007) tuned to achieve the highest MRR.
• T_Dict is a dictionary-only method. This dictionary is used by both T_HB and T_N.

Only the translation pairs with scores above 0.35 are used for T_PH+P, which maximizes its F1-score, when measuring precision, recall and F1-score. For our method T_N^(t), we use the result with (t) = 1, the seed translations, and (t) = 2, which means that only one pass of the whole framework is performed to improve the seed translation function. In addition, we use translation pairs with scores above 0.05 to measure precision, recall, and F1-score. Note that these thresholds do not affect MRRs.

5.3 NE Translation Results

We show the result of the quantitative evaluation in Table 3, where the highest values are boldfaced, except T_Dict, which shows 1.00 precision because it is a manually created dictionary. For both the person and organization cases, our method T_N^(2) outperforms the state-of-the-art methods in terms of precision, recall, F1-score and MRR. With only one iteration of selective propagation, the seed translation is improved to achieve a 0.09 higher MRR. The baselines show lower, but comparable, MRRs and F1-scores for persons, which mostly consist of transliterated cases. However, not all translations have phonetic similarity, especially organization names, as the low F1-score of T_PH+P^S for organizations, 0.06, suggests. The naive multi-type propagation T_PH+P^M shows a decreased MRR for both persons and organizations compared to the single-type propagation T_PH+P^S, which shows a negative influence of the diverse relation semantics of entities of different types. T_HB achieves a better MRR than T_PH+P due to the semantic translation of organization names. However, despite the increased recall of T_HB over that of T_Dict, the precision of T_HB is unsatisfactory because T_HB maps abbreviated names such as 'WTO' to other NEs. As shown in Table 4, T_HB translates 'WTO' inaccurately, linking it to an incorrect organization '巴解组织' (Palestine Liberation Organization).

English name   T_N^(2)                      T_N^(1)                     T_HB
Mandelson      曼德尔森 [ManDeErSen]        曼德尔森 [ManDeErSen]       曼德尔森 [ManDeErSen]
WTO            世贸组织 [ShiMaoZuZhi]       上合组织 [ShangHeZuZhi]     巴解组织 [BaJieZuZhi]
White House    白宫 [BaiGong]               加州 [JiaZhou]              加州 [JiaZhou]
Microsoft      微软公司 [WeiRuanGongSi]     美国司法部 [MeiGuoSiFaBu]   米罗诺夫 [MiLuoNuoFu]
Table 4: Example translations from the different methods (correct translations, boldfaced in the original, are the T_N^(2) entries and, for Mandelson, all three entries).

Figure 6: MRR with decreasing comparability (x-axis: data sets D0, D1, D2; curves: T_N^(2) at t = 2 and T_N^(1) at t = 1).
On the other hand, our method achieves the highest MRR and precision in both the person and organization categories. As shown in Table 4, THB translates ‘WTO’ inaccurately, linking it to an incorrect organization ‘巴解组织’ (Palestine Liberation Organization). 638 The European Union (EU) Trade Commissioner (1) Peter Mandelson traveled to Moscow on Thursday for talks on … Mandelson said it is a priority to see (2) Russia join the WTO, … ⅗ⴏ䍨᱃ငઈ (1) ᖬᗇᴬᗧቄ἞14ᰕ੟〻ࡽᖰ㧛ᯟ、, …ᗧቄ἞൘㹼ࡽਁ㺘Ⲵ༠᰾ѝ䈤, (2) ״㖇ᯟ࣐ޕц䍨㓴㓷ᱟ⅗ⴏՈݸ㘳㲁Ⲵһ亩ѻа, … (Peter Mandelson, traveled to, Moscow) (ᖬᗇᴬᗧቄ἞, ੟〻ࡽᖰ, 㧛ᯟ、) (Russia, join, WTO) (״㖇ᯟ, ࣐ޕ, ц䍨㓴㓷) 1) 2) Figure 5: Example of similar document pairs. Moreover, the use of the corpora by T (1) N could not fix this problem, and it finds another organization related to trade, ‘上合组织’ (Shanghai Cooperation Organization). In contrast, our selective propagation method T (2) N , which uses the wrong seed translation by T (1) N , ‘上合组织’ (Shanghai Cooperation Organization), successfully translates the WTO using statements such as (Russia, join, WTO), and its corresponding Chinese statement (俄罗斯, 加入, 世贸组织). Similarly, both the baseline THB and the seed translation T (1) N matched Microsoft to incorrect Chinese entities that are phonetically similar as indicated by the underlined text. In contrast, T (2) N finds the correct translation despite the phonetic dissimilarity. 5.4 NE Translation Results with Low Corpus Comparability We tested the methods using less comparable data to evaluate the robustness with the following derived datasets: • D0: All news articles are used. • D1: January-December English and JulyDecember Chinese articles are used. • D2: April-September English and JulyDecember Chinese articles are used. Figure 6 shows the MRR comparisons of our method T (2) N and T (1) N on all test entities. Because the commonly appearing NEs are decreasing, the performance decline is inevitable. However, we can see that the MRR of the seed translation method drops significantly on D1 and D2, whereas our method shows 0.14 higher MRR for both cases. 5.5 Similar Documents In this section, we show an example of similar documents in Figure 5. Both articles describe the same event about the visit of Mandelson to Moscow for the discussion on the joining of Russia to the WTO. The extracted statements are the exact translations of each corresponding part as indicated by the arrows. We stress this is an extreme case for illustration, where the two sentences are almost an exact translation, except for a minor asymmetry involving the date (Thursday in English, and 14th in Chinese). In most similar documents, the asymmetry is more significant. The seed translation score T 1 N(WTO, 世贸组织) is not enough to match the entities. However, the context similarity, due to other similar statements such as (1), allows us to match (2). This match helps translation of ‘WTO’ by inspecting the organization that Russia considers to join in both documents. 6 Conclusions This paper proposed a bootstrapping approach for entity translation using multilingual relational clustering. Further, the proposed method could finds similar document pairs by comparing statements to enable us to focus on comparable parts of evidence. We validated the quality of our approach using real-life English and Chinese corpora, and its performance significantly exceeds that of previous approaches. 
Acknowledgment This research was supported by the MKE (The Ministry of Knowledge Economy), Korea and Microsoft Research, under IT/SW Creative research program supervised by the NIPA (National IT Industry Promotion Agency). (NIPA-2012-H050312-1036). 639 References Donghui Feng. 2004. A new approach for englishchinese named entity alignment. In Proceedings of the Conference on Empirical Methods in Natural Language Processing EMNLP, pages 372–379. Long Jiang, Shiquan Yang, Ming Zhou, Xiaohua Liu, and Qingsheng Zhu. 2009. Mining bilingual data from the web with adaptively learnt patterns. In Joint Conference of the ACL and the IJCNLP, pages 870–878, Stroudsburg, PA, USA. T. Joachims. 1999. Making large-scale SVM learning practical. In B. Sch¨olkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge, MA, USA. Seokhwan Kim, Minwoo Jeong, Jonghoon Lee, and Gary Geunbae Lee. 2010. A cross-lingual annotation projection approach for relation detection. In COLING, pages 564–571, Stroudsburg, PA, USA. Jinhan Kim, Long Jiang, Seung-won Hwang, Young-In Song, and Ming Zhou. 2011. Mining entity translations from comparable corpora: a holistic graph mapping approach. In CIKM, pages 1295–1304, New York, NY, USA. Jinhan Kim, Seung won Hwang, Long Jiang, YoungIn Song, and Ming Zhou. 2012. Entity translation mining from comparable corpora: Combining graph mapping with corpus latent features. IEEE Transactions on Knowledge and Data Engineering, 99(PrePrints). Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 423– 430, Stroudsburg, PA, USA. Association for Computational Linguistics. Alexandre Klementiev and Dan Roth. 2006. Named entity transliteration and discovery from multilingual comparable corpora. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLTNAACL ’06, pages 82–88, Stroudsburg, PA, USA. Association for Computational Linguistics. Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. Comput. Linguist., 24(4):599–612, December. Julian Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In ACL, pages 17–22, Stroudsburg, PA, USA. Wai Lam, Shing-Kit Chan, and Ruizhang Huang. 2007. Named entity translation matching and learning: With application for mining unseen translations. ACM Trans. Inf. Syst., 25(1), February. Dekang Lin, Shaojun Zhao, Benjamin Van Durme, and Marius Pasca. 2008. Mining parenthetical translations from the web by word alignment. In ACL. Thahir Mohamed, Estevam Hruschka, and Tom Mitchell. 2011. Discovering relations between noun categories. In EMNLP, pages 1447–1455, Edinburgh, Scotland, UK., July. Ndapandula Nakashole, Gerhard Weikum, and Fabian M. Suchanek. 2012. PATTY: A Taxonomy of Relational Patterns with Semantic Types. In EMNLP. Patrick Schone, Tim Allison, Chris Giannella, and Craig Pfeifer. 2011. Bootstrapping multilingual relation discovery using english wikipedia and wikimedia-induced entity extraction. In ICTAI, pages 944–951, Washington, DC, USA. Li Shao and Hwee Tou Ng. 2004. Mining new word translations from comparable corpora. In COLING, Stroudsburg, PA, USA. S. Van Dongen. 2000. Graph Clustering by Flow Simulation. Ph.D. thesis, University of Utrecht, The Netherlands. Stephen Wan and Cornelia Maria Verspoor. 1998. 
Automatic English-Chinese name transliteration for development of multilingual resources. In ACL, pages 1352-1356, Stroudsburg, PA, USA. Douglas Brent West. 1999. Introduction to graph theory (2nd edition). Prentice Hall. Fan Yang, Jun Zhao, and Kang Liu. 2009. A Chinese-English organization name translation system using heuristic web mining and asymmetric alignment. In Joint Conference of the ACL and the IJCNLP, pages 387-395, Stroudsburg, PA, USA. Gae-won You, Seung-won Hwang, Young-In Song, Long Jiang, and Zaiqing Nie. 2010. Mining name translations from entity graph mapping. In EMNLP, pages 430-439, Stroudsburg, PA, USA. Gae-Won You, Seung-Won Hwang, Young-In Song, Long Jiang, and Zaiqing Nie. 2012. Efficient entity translation mining: A parallelized graph alignment approach. ACM Trans. Inf. Syst., 30(4):25:1-25:23, November.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 641–650, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Transfer Learning Based Cross-lingual Knowledge Extraction for Wikipedia Zhigang Wang†, Zhixing Li†, Juanzi Li†, Jie Tang†, and Jeff Z. Pan‡ † Tsinghua National Laboratory for Information Science and Technology DCST, Tsinghua University, Beijing, China {wzhigang,zhxli,ljz,tangjie}@keg.cs.tsinghua.edu.cn ‡ Department of Computing Science, University of Aberdeen, Aberdeen, UK [email protected] Abstract Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method. 1 Introduction In recent years, the automatic knowledge extraction using Wikipedia has attracted significant research interest in research fields, such as the semantic web. As a valuable source of structured knowledge, Wikipedia infoboxes have been utilized to build linked open data (Suchanek et al., 2007; Bollacker et al., 2008; Bizer et al., 2008; Bizer et al., 2009), support next-generation information retrieval (Hotho et al., 2006), improve question answering (Bouma et al., 2008; Ferr´andez et al., 2009), and other aspects of data exploitation (McIlraith et al., 2001; Volkel et al., 2006; Hogan et al., 2011) using semantic web standards, such as RDF (Pan and Horrocks, 2007; Heino and Pan, 2012) and OWL (Pan and Horrocks, 2006; Pan and Thomas, 2007; Fokoue et al., 2012), and their reasoning services. However, most infoboxes in different Wikipedia language versions are missing. Figure 1 shows the statistics of article numbers and infobox information for six major Wikipedias. Only 32.82% of the articles have infoboxes on average, and the numbers of infoboxes for these Wikipedias vary significantly. For instance, the English Wikipedia has 13 times more infoboxes than the Chinese Wikipedia and 3.5 times more infoboxes than the second largest Wikipedia of German language. English German French Dutch Spanish Chinese 0 0.5 1 1.5 2 2.5 3 3.5 4 x 10 6 Languages Number of Instances Article Infobox Figure 1: Statistics for Six Major Wikipedias. To solve this problem, KYLIN has been proposed to extract the missing infoboxes from unstructured article texts for the English Wikipedia (Wu and Weld, 2007). KYLIN performs well when sufficient training data are available, and such techniques as shrinkage and retraining have been used to increase recall from English Wikipedia’s long tail of sparse infobox classes (Weld et al., 2008; Wu et al., 2008). The extraction performance of KYLIN is limited by the number of available training samples. Due to the great imbalance between different Wikipedia language versions, it is difficult to gather sufficient training data from a single Wikipedia. 
Some translation-based cross-lingual knowledge 641 extraction methods have been proposed (Adar et al., 2009; Bouma et al., 2009; Adafre and de Rijke, 2006). These methods concentrate on translating existing infoboxes from a richer source language version of Wikipedia into the target language. The recall of new target infoboxes is highly limited by the number of equivalent cross-lingual articles and the number of existing source infoboxes. Take Chinese-English1 Wikipedias as an example: current translation-based methods only work for 87,603 Chinese Wikipedia articles, 20.43% of the total 428,777 articles. Hence, the challenge remains: how could we supplement the missing infoboxes for the rest 79.57% articles? On the other hand, the numbers of existing infobox attributes in different languages are highly imbalanced. Table 1 shows the comparison of the numbers of the articles for the attributes in template PERSON between English and Chinese Wikipedia. Extracting the missing value for these attributes, such as awards, weight, influences and style, inside the single Chinese Wikipedia is intractable due to the rarity of existing Chinese attribute-value pairs. Attribute en zh Attribute en zh name 82,099 1,486 awards 2,310 38 birth date 77,850 1,481 weight 480 12 occupation 66,768 1,279 influences 450 6 nationality 20,048 730 style 127 1 Table 1: The Numbers of Articles in TEMPLATE PERSON between English(en) and Chinese(zh). In this paper, we have the following hypothesis: one can use the rich English (auxiliary) information to assist the Chinese (target) infobox extraction. In general, we address the problem of crosslingual knowledge extraction by using the imbalance between Wikipedias of different languages. For each attribute, we aim to learn an extractor to find the missing value from the unstructured article texts in the target Wikipedia by using the rich information in the source language. Specifically, we treat this cross-lingual information extraction task as a transfer learning-based binary classification problem. The contributions of this paper are as follows: 1. We propose a transfer learning-based crosslingual knowledge extraction framework 1Chinese-English denotes the task of Chinese Wikipedia infobox completion using English Wikipedia called WikiCiKE. The extraction performance for the target Wikipedia is improved by using rich infoboxes and textual information in the source language. 2. We propose the TrAdaBoost-based extractor training method to avoid the problems of topic drift and translation errors of the source Wikipedia. Meanwhile, some languageindependent features are introduced to make WikiCiKE as general as possible. 3. Chinese-English experiments for four typical attributes demonstrate that WikiCiKE outperforms both the monolingual extraction method and current translation-based method. The increases of 12.65% for precision and 12.47% for recall in the template named person are achieved when only 30 target training articles are available. The rest of this paper is organized as follows. Section 2 presents some basic concepts, the problem formalization and the overview of WikiCiKE. In Section 3, we propose our detailed approaches. We present our experiments in Section 4. Some related work is described in Section 5. We conclude our work and the future work in Section 6. 
2 Preliminaries In this section, we introduce some basic concepts regarding Wikipedia, formally defining the key problem of cross-lingual knowledge extraction and providing an overview of the WikiCiKE framework. 2.1 Wiki Knowledge Base and Wiki Article We consider each language version of Wikipedia as a wiki knowledge base, which can be represented as K = {ai}p i=1, where ai is a disambiguated article in K and p is the size of K. Formally we define a wiki article a ∈K as a 5-tuple a = (title, text, ib, tp, C), where • title denotes the title of the article a, • text denotes the unstructured text description of the article a, • ib is the infobox associated with a; specifically, ib = {(attri, valuei)}q i=1 represents the list of attribute-value pairs for the article a, 642 Figure 2: Simplified Article of “Bill Gates”. • tp = {attri}r i=1 is the infobox template associated with ib, where r is the number of attributes for one specific template, and • C denotes the set of categories to which the article a belongs. Figure 2 gives an example of these five important elements concerning the article named “Bill Gates”. In what follows, we will use named subscripts, such as aBill Gates, or index subscripts, such as ai, to refer to one particular instance interchangeably. We will use “name in TEMPLATE PERSON” to refer to the attribute attrname in the template tpP ERSON. In this cross-lingual task, we use the source (S) and target (T) languages to denote the languages of auxiliary and target Wikipedias, respectively. For example, KS indicates the source wiki knowledge base, and KT denotes the target wiki knowledge base. 2.2 Problem Formulation Mining new infobox information from unstructured article texts is actually a multi-template, multi-slot information extraction problem. In our task, each template represents an infobox template and each slot denotes an attribute. In the WikiCiKE framework, for each attribute attrT in an infobox template tpT, we treat the task of missing value extraction as a binary classification problem. It predicts whether a particular word (token) from the article text is the extraction target (Finn and Kushmerick, 2004; Lafferty et al., 2001). Given an attribute attrT and an instance (word/token) xi, XS = {xi}n i=1 and XT = {xi}n+m i=n+1 are the sets of instances (words/tokens) in the source and the target language respectively. xi can be represented as a feature vector according to its context. Usually, we have n ≫m in our setting, with much more attributes in the source that those in the target. The function g : X 7→Y maps the instance from X = XS ∪XT to the true label of Y = {0, 1}, where 1 represents the extraction target (positive) and 0 denotes the background information (negative). Because the number of target instances m is inadequate to train a good classifier, we combine the source and target instances to construct the training data set as TD = TDS ∪TDT , where TDS = {xi, g(xi)}n i=1 and TDT = {xi, g(xi)}n+m i=n+1 represent the source and target training data, respectively. Given the combined training data set TD, our objective is to estimate a hypothesis f : X 7→Y that minimizes the prediction error on testing data in the target language. Our idea is to determine the useful part of TDS to improve the classification performance in TDT . We view this as a transfer learning problem. 2.3 WikiCiKE Framework WikiCiKE learns an extractor for a given attribute attrT in the target Wikipedia. 
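Before turning to the framework's components, the formalization above can be summarized in a minimal sketch (Python; the class and field names are illustrative and are not taken from the authors' implementation):

from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class WikiArticle:
    """A wiki article as the 5-tuple a = (title, text, ib, tp, C) of Section 2.1."""
    title: str
    text: str
    infobox: Dict[str, str]                    # ib: attribute -> value pairs
    template: List[str]                        # tp: attribute names of the infobox template
    categories: Set[str] = field(default_factory=set)   # C

@dataclass
class Instance:
    """One word/token x_i, described by context features, with gold label g(x_i)."""
    features: Dict[str, float]
    label: int                                 # 1 = extraction target, 0 = background

def combine_training_data(td_source, td_target):
    """TD = TD_S ∪ TD_T: the n source instances plus the m (much scarcer) target ones."""
    return list(td_source) + list(td_target)
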
As shown in Figure 3, WikiCiKE contains four key components: (1) Automatic Training Data Generation: given the target attribute attrT and two wiki knowledge bases KS and KT , WikiCiKE first generates the training data set TD = TDS ∪TDT automatically. (2) WikiCiKE Training: WikiCiKE uses a transfer learning-based classification method to train the classifier (extractor) f : X 7→Y by using TDS ∪TDT . (3) Template Classification: WikiCiKE then determines proper candidate articles which are suitable to generate the missing value of attrT. (4) WikiCiKE Extraction: given a candidate article a, WikiCiKE uses the learned extractor f to label each word in the text of a, and generate the extraction result in the end. 3 Our Approach In this section, we will present the detailed approaches used in WikiCiKE. 643 Figure 3: WikiCiKE Framework. 3.1 Automatic Training Data Generation To generate the training data for the target attribute attrT , we first determine the equivalent cross-lingual attribute attrS. Fortunately, some templates in non-English Wikipedia (e.g. Chinese Wikipedia) explicitly match their attributes with their counterparts in English Wikipedia. Therefore, it is convenient to align the cross-lingual attributes using English Wikipedia as bridge. For attributes that can not be aligned in this way, currently we manually align them. The manual alignment is worthwhile because thousands of articles belong to the same template may benefit from it and at the same time it is not very costly. In Chinese Wikipedia, the top 100 templates have covered nearly 80% of the articles which have been assigned a template. Once the aligned attribute mapping attrT ↔ attrS is obtained, we collect the articles from both KS and KT containing the corresponding attr. The collected articles from KS are translated into the target language. Then, we use a uniform automatic method, which primarily consists of word labeling and feature vector generation, to generate the training data set TD = {(x, g(x))} from these collected articles. For each collected article a = {title, text, ib, tp, C} and its value of attr, we can automatically label each word x in text according to whether x and its neighbors are contained by the value. The text and value are processed as bags of words {x}text and {x}value. Then for each xi ∈{x}text we have: g(xi) =          1 xi ∈{x}value, |{x}value| = 1 1 xi−1, xi ∈{x}value or xi, xi+1 ∈{x}value, |{x}value| > 1 0 otherwise (1) After the word labeling, each instance (word/token) is represented as a feature vector. In this paper, we propose a general feature space that is suitable for most target languages. As shown in Table 2, we classify the features used in WikiCiKE into three categories: format features, POS tag features and token features. Category Feature Example Format First token of sentence `} L feature Hello World! In first half of sentence `} L Hello World! Starts with two digits 1231å 31th Dec. Starts with four digits 1999t ) 1999’s summer Contains a cash sign 10åor 10$ Contains a percentage 10% symbol Stop words „, 0, Ù& of, the, a, an Pure number 365 Part of an anchor text 5qü Movie Director Begin of an anchor text 8 ¾¡ Game Designer POS tag POS tag of current token features POS tags of previous 5 tokens POS tags of next 5 tokens Token Current token features Previous 5 tokens Next 5 tokens Is current token contained by title Is one of previous 5 tokens contained by title Table 2: Feature Definition. 
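The labeling rule of Equation (1) amounts to a few lines of code. The sketch below is a simplified rendering that treats the value as a bag of tokens; the tokenization and all names are illustrative:

def label_tokens(text_tokens, value_tokens):
    """Word labeling of Equation (1): a token is positive if it equals a
    single-token value, or if it and an adjacent token both occur in a
    multi-token value; everything else is background."""
    value = set(value_tokens)
    labels = []
    for i, tok in enumerate(text_tokens):
        if len(value) == 1:
            labels.append(1 if tok in value else 0)
        else:
            with_prev = i > 0 and text_tokens[i - 1] in value and tok in value
            with_next = i + 1 < len(text_tokens) and tok in value and text_tokens[i + 1] in value
            labels.append(1 if (with_prev or with_next) else 0)
    return labels

# label_tokens("Bill Gates is an American business magnate".split(),
#              "business magnate".split())   ->   [0, 0, 0, 0, 0, 1, 1]
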
The target training data TDT is directly generated from articles in the target language Wikipedia. Articles from the source language Wikipedia are translated into the target language in advance and then transformed into training data TDS. In next section, we will discuss how to train an extractor from TD = TDS ∪TDT . 3.2 WikiCiKE Training Given the attribute attrT , we want to train a classifier f : X 7→Y that can minimize the prediction 644 error for the testing data in the target language. Traditional machine learning approaches attempt to determine f by minimizing some loss function L on the prediction f(x) for the training instance x and its real label g(x), which is ˆf = argmin f∈Θ X L(f(x), g(x)) where (x, g(x)) ∈T DT (2) In this paper, we use TrAdaBoost (Dai et al., 2007), which is an instance-based transfer learning algorithm that was first proposed by Dai to find ˆf. TrAdaBoost requires that the source training instances XS and target training instances XT be drawn from the same feature space. In WikiCiKE, the source articles are translated into the target language in advance to satisfy this requirement. Due to the topic drift problem and translation errors, the joint probability distribution PS(x, g(x)) is not identical to PT (x, g(x)). We must adjust the source training data TDS so that they fit the distribution on TDT . TrAdaBoost iteratively updates the weights of all training instances to optimize the prediction error. Specifically, the weight-updating strategy for the source instances is decided by the loss on the target instances. For each t = 1 ∼T iteration, given a weight vector pt normalized from wt(wt is the weight vector before normalization), we call a basic classifier F that can address weighted instances and then find a hypothesis f that satisfies ˆft = argmin f∈ΘF X L(pt, f(x), g(x)) (x, g(x)) ∈T DS ∪T DT (3) Let ǫt be the prediction error of ˆft at the tth iteration on the target training instances TDT , which is ǫt = 1 Pn+m k=n+1 wt k × n+m X k=n+1 (wt k × | ˆft(xk) −yk|) (4) With ǫt, the weight vector wt is updated by the function: wt+1 = h(wt, ǫt) (5) The weight-updating strategy h is illustrated in Table 3. Finally, a final classifier ˆf can be obtained by combining ˆfT/2 ∼ˆfT . TrAdaBoost has a convergence rate of O( p ln(n/N)), where n and N are the number of source samples and number of maximum iterations respectively. TrAdaBoost AdaBoost Target + wt wt samples − wt × β−1 t wt × β−1 t Source + wt × β−1 No source training samples − wt × β sample available +: correctly labelled −: miss-labelled wt: weight of an instance at the tth iteration βt = ǫt × (1 −ǫt) β = 1/(1 + √ 2 ln nT) Table 3: Weight-updating Strategy of TrAdaBoost. 3.3 Template Classification Before using the learned classifier f to extract missing infobox value for the target attribute attrT, we must select the correct articles to be processed. For example, the article aNew Y ork is not a proper article for extracting the missing value of the attribute attrbirth day. If a already has an incomplete infobox, it is clear that the correct tp is the template of its own infobox ib. For those articles that have no infoboxes, we use the classical 5-nearest neighbor algorithm to determine their templates (Roussopoulos et al., 1995) using their category labels, outlinks, inlinks as features (Wang et al., 2012). Our classifier achieves an average precision of 76.96% with an average recall of 63.29%, and can be improved further. 
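Returning to the training step of Section 3.2, one TrAdaBoost iteration updates the instance weights roughly as in the sketch below (following the general formulation of Dai et al. (2007); the clipping of the target error and all names are our own simplifications, not the authors' code):

import numpy as np

def tradaboost_update(w, preds, labels, n_source, beta):
    """One TrAdaBoost weight update: source mistakes are shrunk by beta, target
    mistakes are boosted by beta_t**-1, where beta_t is derived from the weighted
    error eps_t on the target training instances (cf. Eq. (4))."""
    w = np.asarray(w, dtype=float)
    mistakes = (np.asarray(preds) != np.asarray(labels)).astype(float)  # |f(x_k) - y_k| for 0/1 labels
    w_tgt, m_tgt = w[n_source:], mistakes[n_source:]
    eps_t = float(np.sum(w_tgt * m_tgt) / np.sum(w_tgt))
    eps_t = min(max(eps_t, 1e-6), 0.499)          # keep beta_t inside (0, 1)
    beta_t = eps_t / (1.0 - eps_t)
    w_new = w.copy()
    w_new[:n_source] *= beta ** mistakes[:n_source]   # down-weight misleading source data
    w_new[n_source:] *= beta_t ** (-m_tgt)            # up-weight hard target data
    return w_new

# beta stays fixed over the T iterations, e.g. beta = 1 / (1 + np.sqrt(2 * np.log(n_source) / T)).
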
In this paper, we concentrate on the WikiCiKE training and extraction components. 3.4 WikiCiKE Extraction Given an article a determined by template classification, we generate the missing value of attr from the corresponding text. First, we turn the text into a word sequence and compute the feature vector for each word based on the feature definition in Section 3.1. Next we use f to label each word, and we get a labeled sequence textl as textl = {xf(x1) 1 ...xf(xi−1) i−1 xf(xi) i xf(xi+1) i+1 ...xf(xn) n } where the superscript f(xi) ∈{0, 1} represents the positive or negative label by f. After that, we extract the adjacent positive tokens in text as the predict value. In particular, the longest positive token sequence and the one that contains other positive token sequences are preferred in extraction. E.g., a positive sequence “comedy movie director” is preferred to a shorter sequence “movie director”. 645 4 Experiments In this section, we present our experiments to evaluate the effectiveness of WikiCiKE, where we focus on the Chinese-English case; in other words, the target language is Chinese and the source language is English. It is part of our future work to try other language pairs which two Wikipedias of these languages are imbalanced in infobox information such as English-Dutch. 4.1 Experimental Setup 4.1.1 Data Sets Our data sets are from Wikipedia dumps2 generated on April 3, 2012. For each attribute, we collect both labeled articles (articles that contain the corresponding attribute attr) and unlabeled articles in Chinese. We split the labeled articles into two subsets AT and Atest(AT ∩Atest = ∅), in which AT is used as target training articles and Atest is used as the first testing set. For the unlabeled articles, represented as A′ test, we manually label their infoboxes with their texts and use them as the second testing set. For each attribute, we also collect a set of labeled articles AS in English as the source training data. Our experiments are performed on four attributes, which are occupation, nationality, alma mater in TEMPLATE PERSON, and country in TEMPLATE FILM. In particular, we extract values from the first two paragraphs of the texts because they usually contain most of the valuable information. The details of data sets on these attributes are given in Table 4. Attribute |AS| |AT| |Atest| |A′ test| occupation 1,000 500 779 208 alma mater 1,000 200 215 208 nationality 1,000 300 430 208 country 1,000 500 1,000 − |A|: the number of articles in A Table 4: Data Sets. 4.1.2 Comparison Methods We compare our WikiCiKE method with two different kinds of methods, the monolingual knowledge extraction method and the translation-based method. They are implemented as follows: 1. KE-Mon is the monolingual knowledge extractor. The difference between WikiCiKE and KE-Mon is that KE-Mon only uses the Chinese training data. 2http://dumps.wikimedia.org/ 2. KE-Tr is the translation-based extractor. It obtains the values by two steps: finding their counterparts (if available) in English using Wikipedia cross-lingual links and attribute alignments, and translating them into Chinese. We conduct two series of evaluation to compare WikiCiKE with KE-Mon and KE-Tr, respectively. 1. We compare WikiCiKE with KE-Mon on the first testing data set Atest, where most values can be found in the articles’ texts in those labeled articles, in order to demonstrate the performance improvement by using crosslingual knowledge transfer. 2. 
We compare WikiCiKE with KE-Tr on the second testing data set A ′ test, where the existences of values are not guaranteed in those randomly selected articles, in order to demonstrate the better recall of WikiCiKE. For implementation details, the weighted-SVM is used as the basic learner f both in WikiCiKE and KE-Mon (Zhang et al., 2009), and Baidu Translation API3 is used as the translator both in WikiCiKE and KE-Tr. The Chinese texts are preprocessed using ICTCLAS4 for word segmentation. 4.1.3 Evaluation Metrics Following Lavelli’s research on evaluation of information extraction (Lavelli et al., 2008), we perform evaluation as follows. 1. We evaluate each attr separately. 2. For each attr, there is exactly one value extracted. 3. No alternative occurrence of real value is available. 4. The overlap ratio is used in this paper rather than “exactly matching” and “containing”. Given an extracted value v′ = {w′} and its corresponding real value v = {w}, two measurements for evaluating the overlap ratio are defined: recall: the rate of matched tokens w.r.t. the real value. It can be calculated using R(v′, v) = |v ∩v′| |v| 3http://openapi.baidu.com/service 4http://www.ictclas.org/ 646 precision: the rate of matched tokens w.r.t. the extracted value. It can be calculated using P(v′, v) = |v ∩v′| |v′| We use the average of these two measures to evaluate the performance of our extractor as follows: R = avg(Ri(v′, v)) ai ∈Atest P = avg(Pi(v′, v)) ai ∈Atest and vi′ ̸= ∅ The recall and precision range from 0 to 1 and are first calculated on a single instance and then averaged over the testing instances. 4.2 Comparison with KE-Mon In these experiments, WikiCiKE trains extractors on AS ∪AT , and KE-Mon trains extractors just on AT . We incrementally increase the number of target training articles from 10 to 500 (if available) to compare WikiCiKE with KE-Mon in different situations. We use the first testing data set Atest to evaluate the results. Figure 4 and Table 5 show the experimental results on TEMPLATE PERSON and FILM. We can see that WikiCiKE outperforms KE-Mon on all three attributions especially when the number of target training samples is small. Although the recall for alma mater and the precision for nationality of WikiCiKE are lower than KE-Mon when only 10 target training articles are available, WikiCiKE performs better than KE-Mon if we take into consideration both precision and recall. 10 30 50 100 200 300 500 0 0.2 0.4 0.6 0.8 number of target training articles P(KE−Mon) P(WikiCiKE) R(KE−Mon) R(WikiCiKE) (a) occupation 10 30 50 100 200 0.4 0.5 0.6 0.7 0.8 0.9 1 number of target training articles P(KE−Mon) P(WikiCiKE) R(KE−Mon) R(WikiCiKE) (b) alma mater 10 30 50 100 200 300 0.5 0.6 0.7 0.8 0.9 1 number of target training articles P(KE−Mon) P(WikiCiKE) R(KE−Mon) R(WikiCiKE) (c) nationality 10 30 50 100 200 300 500 0 5 10 15 20 percent(%) number of target training articles performance gain P R (d) average improvements Figure 4: Results for TEMPLATE PERSON. Figure 4(d) shows the average improvements yielded by WikiCiKE w.r.t KE-Mon on TEMPLATE PERSON. We can see that WikiCiKE yields significant improvements when only a few articles are available in target language and the improvements tend to decrease as the number of target articles is increased. In this case, the articles in the target language are sufficient to train the extractors alone. 
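The overlap measures of Section 4.1.3 reduce to token-level precision and recall on a single extracted value, for example as in the following sketch (which treats each value as a set of unique tokens, a simplification of the bag-of-words treatment in the paper):

def overlap_scores(extracted_value, real_value):
    """Token-level overlap of Section 4.1.3:
    R(v', v) = |v ∩ v'| / |v|  and  P(v', v) = |v ∩ v'| / |v'|."""
    pred, gold = set(extracted_value.split()), set(real_value.split())
    inter = len(pred & gold)
    recall = inter / len(gold) if gold else 0.0
    precision = inter / len(pred) if pred else 0.0
    return precision, recall

# overlap_scores("movie director", "comedy movie director")  ->  (1.0, 0.666...)
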
# KE-Mon WikiCiKE P R P R 10 81.1% 63.8% 90.7% 66.3% 30 78.8% 64.5% 87.5% 69.4% 50 80.7% 66.6% 87.7% 72.3% 100 82.8% 68.2% 87.8% 72.1% 200 83.6% 70.5% 87.1% 73.2% 300 85.2% 72.0% 89.1% 76.2% 500 86.2% 73.4% 88.7% 75.6% # Number of the target training articles. Table 5: Results for country in TEMPLATE FILM. 4.3 Comparison with KE-Tr We compare WikiCiKE with KE-Tr on the second testing data set A ′ test. From Table 6 it can be clearly observed that WikiCiKE significantly outperforms KE-Tr both in precision and recall. The reasons why the recall of KE-Tr is extremely low are two-fold. First, because of the limit of cross-lingual links and infoboxes in English Wikipedia, only a very small set of values is found by KE-Tr. Furthermore, many values obtained using the translator are incorrect because of translation errors. WikiCiKE uses translators too, but it has better tolerance to translation errors because the extracted value is from the target article texts instead of the output of translators. Attribute KE-Tr WikiCiKE P R P R occupation 27.4% 3.40% 64.8% 26.4% nationality 66.3% 4.60% 70.0% 55.0% alma mater 66.7% 0.70% 76.3% 8.20% Table 6: Results of WikiCiKE vs. KE-Tr. 4.4 Significance Test We conducted a significance test to demonstrate that the difference between WikiCiKE and KEMon is significant rather than caused by statistical errors. As for the comparison between WikiCiKE and KE-Tr, significant improvements brought by 647 WikiCiKE can be clearly observed from Table 6 so there is no need for further significance test. In this paper, we use McNemar’s significance test (Dietterich and Thomas, 1998). Table 7 shows the results of significance test calculated for the average on all tested attributes. When the number of target training articles is less than 100, the χ is much less than 10.83 that corresponds to a significance level 0.001. It suggests that the chance that WikiCiKE is not better than KE-Mon is less than 0.001. # 10 30 50 100 200 300 500 χ 179.5 107.3 51.8 32.8 4.1 4.3 0.3 # Number of the target training articles. Table 7: Results of Significance Test. 4.5 Overall Analysis As shown in above experiments, we can see that WikiCiKE outperforms both KE-Mon and KE-Tr. When only 30 target training samples are available, WikiCiKE reaches comparable performance of KE-Mon using 300-500 target training samples. Among all of the 72 attributes in TEMPLATE PERSON of Chinese Wikipedia, 39 (54.17%) and 55 (76.39%) attributes have less than 30 and 200 labeled articles respectively. We can see that WikiCiKE can save considerable human labor when no sufficient target training samples are available. We also examined the errors by WikiCiKE and they can be categorized into three classes. For attribute occupation when 30 target training samples are used, there are 71 errors. The first category is caused by incorrect word segmentation (40.85%). In Chinese, there is no space between words so we need to segment them before extraction. The result of word segmentation directly decide the performance of extraction so it causes most of the errors. The second category is because of the incomplete infoboxes (36.62%). In evaluation of KE-Mon, we directly use the values in infoboxex as golden values, some of them are incomplete so the correct predicted values will be automatically judged as the incorrect in these cases. The last category is mismatched words (22.54%). The predicted value does not match the golden value or a part of it. 
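For reference, the statistic behind the significance test of Section 4.4 can be computed as below (a sketch of McNemar's test with the usual continuity correction, as in Dietterich (1998); 10.83 is the chi-square threshold for one degree of freedom at the 0.001 level):

def mcnemar_chi2(only_a_correct, only_b_correct):
    """McNemar's chi-square on paired predictions: the two counts are the items
    that only one of the two compared systems got right. Values above 10.83
    correspond to a significance level of 0.001."""
    n01, n10 = only_a_correct, only_b_correct
    if n01 + n10 == 0:
        return 0.0
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
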
In the future, we can improve the performance of WikiCiKE by polishing the word segmentation result. 5 Related Work Some approaches of knowledge extraction from the open Web have been proposed (Wu et al., 2012; Yates et al., 2007). Here we focus on the extraction inside Wikipedia. 5.1 Monolingual Infobox Extraction KYLIN is the first system to autonomously extract the missing infoboxes from the corresponding article texts by using a self-supervised learning method (Wu and Weld, 2007). KYLIN performs well when enough training data are available. Such techniques as shrinkage and retraining are proposed to increase the recall from English Wikipedia’s long tail of sparse classes (Wu et al., 2008; Wu and Weld, 2010). Different from Wu’s research, WikiCiKE is a cross-lingual knowledge extraction framework, which leverags rich knowledge in the other language to improve extraction performance in the target Wikipedia. 5.2 Cross-lingual Infobox Completion Current translation based methods usually contain two steps: cross-lingual attribute alignment and value translation. The attribute alignment strategies can be grouped into two categories: cross-lingual link based methods (Bouma et al., 2009) and classification based methods (Adar et al., 2009; Nguyen et al., 2011; Aumueller et al., 2005; Adafre and de Rijke, 2006; Li et al., 2009). After the first step, the value in the source language is translated into the target language. E. Adar’s approach gives the overall precision of 54% and recall of 40% (Adar et al., 2009). However, recall of these methods is limited by the number of equivalent cross-lingual articles and the number of infoboxes in the source language. It is also limited by the quality of the translators. WikiCiKE attempts to mine the missing infoboxes directly from the article texts and thus achieves a higher recall compared with these methods as shown in Section 4.3. 5.3 Transfer Learning Transfer learning can be grouped into four categories: instance-transfer, feature-representationtransfer, parameter-transfer and relationalknowledge-transfer (Pan and Yang, 2010). TrAdaBoost, the instance-transfer approach, is an extension of the AdaBoost algorithm, and demonstrates better transfer ability than tradition648 al learning techniques (Dai et al., 2007). Transfer learning have been widely studied for classification, regression, and cluster problems. However, few efforts have been spent in the information extraction tasks with knowledge transfer. 6 Conclusion and Future Work In this paper we proposed a general cross-lingual knowledge extraction framework called WikiCiKE, in which extraction performance in the target Wikipedia is improved by using rich infoboxes in the source language. The problems of topic drift and translation error were handled by using the TrAdaBoost model. Chinese-English experimental results on four typical attributes showed that WikiCiKE significantly outperforms both the current translation based methods and the monolingual extraction methods. In theory, WikiCiKE can be applied to any two wiki knowledge based of different languages. We have been considering some future work. Firstly, more attributes in more infobox templates should be explored to make our results much stronger. Secondly, knowledge in a minor language may also help improve extraction performance for a major language due to the cultural and religion differences. A bidirectional cross-lingual extraction approach will also be studied. 
Last but not least, we will try to extract multiple attr-value pairs at the same time for each article. Furthermore, our work is part of a more ambitious agenda on exploitation of linked data. On the one hand, being able to extract data and knowledge from multilingual sources such as Wikipedia could help improve the coverage of linked data for applications. On the other hand, we are also investigating how to possibly integrate information, including subjective information (Sensoy et al., 2013), from multiple sources, so as to better support data exploitation in context dependent applications. Acknowledgement The work is supported by NSFC (No. 61035004), NSFC-ANR (No. 61261130588), 863 High Technology Program (2011AA01A207), FP7-288342, FP7 K-Drive project (286348), the EPSRC WhatIf project (EP/J014354/1) and THU-NUS NExT CoLab. Besides, we gratefully acknowledge the assistance of Haixun Wang (MSRA) for improving the paper work. References S. Fissaha Adafre and M. de Rijke. 2006. Finding Similar Sentences across Multiple Languages in Wikipedia. EACL 2006 Workshop on New Text: Wikis and Blogs and Other Dynamic Text Sources. Sisay Fissaha Adafre and Maarten de Rijke. 2005. Discovering Missing Links in Wikipedia. Proceedings of the 3rd International Workshop on Link Discovery. Eytan Adar, Michael Skinner and Daniel S. Weld. 2009. Information Arbitrage across Multi-lingual Wikipedia. WSDM’09. David Aumueller, Hong Hai Do, Sabine Massmann and Erhard Rahm”. 2005. Schema and ontology matching with COMA++. SIGMOD Conference’05. Christian Bizer, Jens Lehmann, Georgi Kobilarov, S¨oren Auer, Christian Becker, Richard Cyganiak and Sebastian Hellmann. 2009. DBpedia - A crystallization Point for the Web of Data. J. Web Sem.. Christian Bizer, Tom Heath, Kingsley Idehen and Tim Berners-Lee. 2008. Linked data on the web (LDOW2008). WWW’08. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge and Jamie Taylor. 2008. Freebase: a Collaboratively Created Graph Database for Structuring Human Knowledge. SIGMOD’08. Gosse Bouma, Geert Kloosterman, Jori Mur, Gertjan Van Noord, Lonneke Van Der Plas and Jorg Tiedemann. 2008. Question Answering with Joost at CLEF 2007. Working Notes for the CLEF 2008 Workshop. Gosse Bouma, Sergio Duarte and Zahurul Islam. 2009. Cross-lingual Alignment and Completion of Wikipedia Templates. CLIAWS3 ’09. Wenyuan Dai, Qiang Yang, Gui-Rong Xue and Yong Yu. 2007. Boosting for Transfer Learning. ICML’07. Dietterich and Thomas G. 1998. Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms. Neural Comput.. Sergio Ferr´andez, Antonio Toral, ´ıscar Ferr´andez, Antonio Ferr´andez and Rafael Mu˜noz. 2009. Exploiting Wikipedia and EuroWordNet to Solve CrossLingual Question Answering. Inf. Sci.. Aidan Finn and Nicholas Kushmerick. 2004. Multilevel Boundary Classification for Information Extraction. ECML. Achille Fokoue, Felipe Meneguzzi, Murat Sensoy and Jeff Z. Pan. 2012. Querying Linked Ontological Data through Distributed Summarization. Proc. of the 26th AAAI Conference on Artificial Intelligence (AAAI2012). 649 Yoav Freund and Robert E. Schapire. 1997. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput. Syst. Sci.. Norman Heino and Jeff Z. Pan. 2012. RDFS Reasoning on Massively Parallel Hardware. Proc. of the 11th International Semantic Web Conference (ISWC2012). Aidan Hogan, Jeff Z. Pan, Axel Polleres and Yuan Ren. 2011. Scalable OWL 2 Reasoning for Linked Data. Reasoning Web. 
Semantic Technologies for the Web of Data. Andreas Hotho, Robert J¨aschke, Christoph Schmitz and Gerd Stumme. 2006. Information Retrieval in Folksonomies: Search and Ranking. ESWC’06. John D. Lafferty, Andrew McCallum and Fernando C. N. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. ICML’01. Alberto Lavelli, MaryElaine Califf, Fabio Ciravegna, Dayne Freitag, Claudio Giuliano, Nicholas Kushmerick, Lorenza Romano and Neil Ireson. 2008. Evaluation of Machine Learning-based Information Extraction Algorithms: Criticisms and Recommendations. Language Resources and Evaluation. Juanzi Li, Jie Tang, Yi Li and Qiong Luo. 2009. RiMOM: A Dynamic Multistrategy Ontology Alignment Framework. IEEE Trans. Knowl. Data Eng.. Xiao Ling, Gui-Rong Xue, Wenyuan Dai, Yun Jiang, Qiang Yang and Yong Yu. 2008. Can Chinese Web Pages be Classified with English Data Source?. WWW’08. Sheila A. McIlraith, Tran Cao Son and Honglei Zeng. 2001. Semantic Web Services. IEEE Intelligent Systems. Thanh Hoang Nguyen, Viviane Moreira, Huong Nguyen, Hoa Nguyen and Juliana Freire. 2011. Multilingual Schema Matching for Wikipedia Infoboxes. CoRR. Jeff Z. Pan and Edward Thomas. 2007. Approximating OWL-DL Ontologies. 22nd AAAI Conference on Artificial Intelligence (AAAI-07). Jeff Z. Pan and Ian Horrocks. 2007. RDFS(FA): Connecting RDF(S) and OWL DL. IEEE Transaction on Knowledge and Data Engineering. 19(2): 192 206. Jeff Z. Pan and Ian Horrocks. 2006. OWL-Eu: Adding Customised Datatypes into OWL. Journal of Web Semantics. Sinno Jialin Pan and Qiang Yang. 2010. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng.. Nick Roussopoulos, Stephen Kelley and Fr´ed´eric Vincent. 1995. Nearest Neighbor Queries. SIGMOD Conference’95. Murat Sensoy, Achille Fokoue, Jeff Z. Pan, Timothy Norman, Yuqing Tang, Nir Oren and Katia Sycara. 2013. Reasoning about Uncertain Information and Conflict Resolution through Trust Revision. Proc. of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS2013). Fabian M. Suchanek, Gjergji Kasneci and Gerhard Weikum. 2007. Yago: a Core of Semantic Knowledge. WWW’07. Max Volkel, Markus Krotzsch, Denny Vrandecic, Heiko Haller and Rudi Studer. 2006. Semantic Wikipedia. WWW’06. Zhichun Wang, Juanzi Li, Zhigang Wang and Jie Tang. 2012. Cross-lingual Knowledge Linking across Wiki Knowledge Bases. 21st International World Wide Web Conference. Daniel S. Weld, Fei Wu, Eytan Adar, Saleema Amershi, James Fogarty, Raphael Hoffmann, Kayur Patel and Michael Skinner. 2008. Intelligence in Wikipedia. AAAI’08. Fei Wu and Daniel S. Weld. 2007. Autonomously Semantifying Wikipedia. CIKM’07. Fei Wu and Daniel S. Weld. 2010. Open Information Extraction Using Wikipedia. ACL’10. Fei Wu, Raphael Hoffmann and Daniel S. Weld. 2008. Information Extraction from Wikipedia: Moving down the Long Tail. KDD’08. Wentao Wu, Hongsong Li, Haixun Wang and Kenny Qili Zhu. 2012. Probase: a Probabilistic Taxonomy for Text Understanding. SIGMOD Conference’12. Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead and Stephen Soderland. 2007. TextRunner: Open Information Extraction on the Web. NAACL-Demonstrations’07. Xinfeng Zhang, Xiaozhao Xu, Yiheng Cai and Yaowei Liu. 2009. A Weighted Hyper-Sphere SVM. ICNC(3)’09. 650
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 651–659, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Bridging Languages through Etymology: The case of cross language text categorization Vivi Nastase and Carlo Strapparava Human Language Technologies, Fondazione Bruno Kessler Trento, Italy {nastase, strappa}@fbk.eu Abstract We propose the hypothesis that word etymology is useful for NLP applications as a bridge between languages. We support this hypothesis with experiments in crosslanguage (English-Italian) document categorization. In a straightforward bag-ofwords experimental set-up we add etymological ancestors of the words in the documents, and investigate the performance of a model built on English data, on Italian test data (and viceversa). The results show not only statistically significant, but a large improvement – a jump of almost 40 points in F1-score – over the raw (vanilla bag-ofwords) representation. 1 Introduction When exposed to a document in a language he does not know, a reader might be able to glean some meaning from words that are the same (e.g. names) or similar to those in a language he knows. As an example, let us say that an Italian speaker is reading an English text that contains the word expense, which he does not know. He may be reminded however of the Latin word expensa which is also the etymological root of the Italian word spesa, which usually means “cost”/”shopping”, and may thus infer that the English word refers to the cost of things. In the experiments presented here we investigate whether an automatic text categorization system could benefit from knowledge about the etymological roots of words. The cross language text categorization (CLTC) task consists of categorizing documents in a target language Lt using a model built from labeled examples in a source language Ls. The task becomes more difficult when the data consists of comparable corpora in the two languages – documents on the same topics (e.g. sports, economy) – instead of parallel corpora – there exists a one-to-one correspondence between documents in the corpora for the two languages, one document being the translation of the other. To test the usefulness of etymological information we work with comparable collections of news articles in English and Italian, whose articles are assigned one of four categories: culture and school, tourism, quality of life, made in Italy. We perform a progression of experiments, which embed etymological information deeper and deeper into the model. We start with the basic set-up, representing the documents as bag-of-words, where we train a model on the English training data, and use this model to categorize documents from the Italian test data (and viceversa). The results are better than random, but quite low. We then add the etymological roots of the words in the data to the bag-of-words, and notice a large – 21 points – increase in performance in terms of F1-score. We then use the bag-of-words representation of the training data to build a semantic space using LSA, and use the generated word vectors to represent the training and test data. The improvement is an additional 16 points in F1-score. Compared to related work, presented in Section 3, where cross language text categorization is approached through translation or mapping of features (i.e. words) from the source to the target language, word etymologies are a novel source of cross-lingual knowledge. 
Instead of mapping features between languages, we introduce new features which are shared, and thus do not need translation or other forms of mapping. The experiments presented show unequivocally that word etymology is a useful addition to computational models, just as they are to readers who have such knowledge. This is an interesting and useful result, especially in the current research landscape where using and exploiting multi-linguality is a desired requirement. 651 morpheme relation related morpheme eng: exrel:etymological origin of eng: excentric eng: expense rel:etymology lat: expensa eng: -ly rel:etymological origin of eng: absurdly eng: -ly rel:etymological origin of eng: admirably ... ita: spesa rel:etymology lat: expensa ita: spesa rel:has derived form ita: spese ... ita: spesare rel:etymologically related ita: spesa ... lat: expensa rel:etymological origin of eng: expense lat: expensa rel:etymological origin of ita: spesa ... lat: expensa rel:is derived from lat: expensus ... English: muscle ↓ French: muscle ↓ Latin: musculus ↓ Latin: mus ↓ Proto Indo-European: muh2s Figure 1: Sample entries from the Etymological WordNet, and a few etymological layers 2 Word Etymology Word etymology gives us a glimpse into the evolution of words in a language. Words may be adopted from a language because of cultural, scientific, economic, political or other reasons (Hitchings, 2009). In time these words “adjust” to the language that adopted them – their sense may change to various degrees – but they are still semantically related to their etymological roots. To illustrate the point, we show an example that the reader, too, may find amusing: on the ticket validation machine on Italian buses, by way of instruction, it is written Per obliterare il biglietto .... A native/frequent English speaker would most probably key in on, and be puzzled by, the word obliterare, very similar to the English obliterate, whose most used sense is to destroy completely / cause to physically disappear . The Italian obliterare has the “milder” sense of cancellare – cancel (which is also shared by the English obliterate, but is less frequent according to Merriam-Webster), and both come from the Latin obliterare – erase, efface, cause to disappear. While there has been some sense migration – in English the more (physically) destructive sense of the word has higher prominence, while in Italian the word is closer in meaning to its etymological root – the Italian and the English words are still semantically related. Dictionaries customarily include etymological information for their entries, and recently, Wikipedia’s Wiktionary has joined this trend. The etymological information can, and indeed has been extracted and prepared for machine consumption (de Melo and Weikum, 2010): Etymological WordNet1 contains 6,031,431 entries for 2,877,036 words (actually, morphemes) in 397 languages. A few sample entries from this resource are shown in Figure 1. The information in Etymological WordNet is organized around 5 relations: etymology with its inverse etymological origin of; is derived from with its inverse has derived form; and the symmetrical etymologically related. The etymology relation links a word with its etymological ancestors, and it is the relation used in the experiments presented here. Prefixes and suffixes – such as exand -ly shown in Figure 1 – are filtered out, as they bring in much noise by relating words that merely share such a morpheme (e.g. absurdly and admirably) but are otherwise semantically distant. 
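A minimal sketch of how such a resource can be loaded and filtered is given below (Python; it assumes the dictionary is available as tab-separated triples of the form shown in Figure 1, and the affix filter is a simplification of the filtering described above):

from collections import defaultdict

def load_etymology(path):
    """Keep only rel:etymology triples and drop prefix/suffix entries such as
    'eng: ex-' or 'eng: -ly', which merely share a morpheme with their 'descendants'."""
    ancestors = defaultdict(set)          # "lang: form" -> set of etymological ancestors
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 3:
                continue
            child, rel, parent = parts
            if rel != "rel:etymology":
                continue
            form = child.split(": ")[-1]
            if form.startswith("-") or form.endswith("-"):   # affix filter
                continue
            ancestors[child].add(parent)
    return ancestors

# The entries for "ita: spesa" and "eng: expense" would then both contain
# "lat: expensa", which becomes a feature shared by the two languages.
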
has derived form is also used, to capture morphological variations. The depth of the etymological hierarchy (considering the etymology relations) is 10. Figure 1 shows an example of a word with several levels of etymological ancestry. 1http://www1.icsi.berkeley.edu/ ˜demelo/etymwn/ 652   English texts Italian texts te 1 te 2 · · · te n−1 te n ti 1 ti 2 · · · ti m−1 ti m we 1 0 1 · · · 0 1 0 0 · · · English Lexicon we 2 1 1 · · · 1 0 0 ... ... . . . . . . . . . . . . . . . . . . . . . . . . ... 0 ... we p−1 0 1 · · · 0 0 ... 0 we p 0 1 · · · 0 0 · · · 0 0 shared names and words we/i 1 1 0 · · · 0 0 0 0 · · · 0 1 ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . common etymology wetym 1 0 1 · · · 0 0 0 0 · · · 1 0 ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . wi 1 0 0 · · · 0 1 · · · 1 1 Italian Lexicon wi 2 0 ... 1 1 · · · 0 1 ... ... 0 ... . . . . . . . . . . . . . . . . . . . . . . . . . wi q−1 ... 0 0 1 · · · 0 1 wi q · · · 0 0 0 1 · · · 1 0   Figure 2: Multilingual word-by-document matrix 3 Cross Language Text Categorization Text categorization (also text classification), “the task of automatically sorting a set of documents into categories (or classes or topics) from a predefined set” (Sebastiani, 2005), allows for the quick selection of documents from the same domain, or the same topic. It is a very well research area, dating back to the 60s (Borko and Bernick, 1962). The most frequently, and successfully, used document representation is the bag-of-words (BoWs). Results using this representation achieve accuracy in the 90%s. Most variations include feature filtering or weighing, and variations in learning algorithms (Sebastiani, 2005). Within the area of cross-language text categorization (CLTC) several methods have been explored for producing the model for a target language Lt using information and data from the source language Ls. In a precursor task to CLTC, cross language information retrieval (CLIR), Dumais et al. (1997) find semantic correspondences in parallel (different language) corpora through latent semantic analysis (LSA). Most CLTC methods rely heavily on machine translation (MT). MT has been used: to cast the cross-language text categorization problem to the monolingual setting (Fortuna and Shawe-Taylor, 2005); to cast the cross-language text categorization problem into two monolingual settings for active learning (Liu et al., 2012); to translate and adapt a model built on language Ls to language Lt (Rigutini et al., 2005), (Shi et al., 2010); to produce parallel corpora for multi-view learning (Guo and Xiao, 2012). Wan et al. (2011) also use machine translation, but enhance the processing through domain adaptation by feature weighing, assuming that the training data in one language and the test data in the other come from different domains, or can exhibit different linguistic phenomena due to linguistic and cultural differences. Prettenhofer and Stein (2010) use a word translation oracle to produce pivots – pairs of semantically similar words – and use the data partitions induced by these words to find cross language structural correspondences. In a computationally lighter framework, not dependent on MT, Gliozzo and Strapparava (2006) and Wu et al. 
(2008) use bilingual lexicons and aligned WordNet synsets to obtain shared features between the training data in language Ls and the testing data in language Lt. Gliozzo and Strapparava (2005), the first to use comparable as op653 posed to parallel corpora for CLTC, use LSA to build multilingual domain models. The bag-of-word document representation maps a document di from a corpus D into a kdimensional space Rk, where k is the dimension of the (possibly filtered) vocabulary of the corpus: W = {w1, ..., wk}. Position j in the vector representation of di corresponds to word wj, and it may have different values, among the most commonly used being: binary values – wj appears (1) or not (0) in di; frequency of occurrence of wj in di, absolute or normalized (relative to the size of the document or the size of the vocabulary); the tf ∗idf(wj, di, D). For the task of cross language text categorization, the problem of sharing a model across languages is that the dimensions, a.k.a the vocabulary, of the two languages are largely different. Limited overlap can be achieved through shared names and words. As we have seen in the literature review, machine translation and bilingual dictionaries can be used to cast these dimensions from the source language Ls to the target language Lt. In this work we explore expanding the shared dimensions through word etymologies. Figure 2 shows schematically the binary k dimensional representation for English and Italian data, and shared dimensions. Cross language text categorization could be used to obtain comparable corpora for building translation models. In such a situation, relying on a framework that itself relies on machine translation is not helpful. Bilingual lexicons are available for frequently studied languages, but less so for those poorer in resources. Considering such shortcomings, we look into additional linguistic information, in particular word etymology. This information impacts the data representation, by introducing new shared features between the different language corpora without the need for translation or other forms of mapping. The newly produced representation can be used in conjunction with any of the previously proposed algorithms. Word etymologies are a novel source of linguistic information in NLP, possibly because resources that capture this information in a machine readable format are also novel. Fang et al. (2009) used limited etymological information extracted from the Collins English Dictionary (CED) for text categorization on the British National Corpus (BNC): information on the provenance of words (ranges of probability distribution of etymologies in different versions of Latin – New Latin, Late Latin, Medieval Latin) was used in a “home-made” range classifier. The experiments presented in this paper use the bag-of-word document representation with absolute frequency values. To this basic representation we add word etymological ancestors and run classification experiments. We then use LSA – previously shown by (Dumais et al., 1997) and (Gliozzo and Strapparava, 2005) to be useful for this task – to induce the latent semantic dimensions of documents and words respectively, hypothesizing that word etymological ancestors will lead to semantic dimensions that transcend language boundaries. 
The vectors obtained through LSA (on the training data only) for words that are shared by the English training data and the Italian test data (names, and most importantly, etymological ancestors of words in the original documents) are then used for rerepresenting the training and test data. The same process is applied for Italian training and English test data. Classification is done using support vector machines (SVMs). 3.1 Data The data we work with consists of comparable corpora of news articles in English and Italian. Each news article is annotated with one of the four categories: culture and school, tourism, quality of life, made in Italy. Table 1 shows the dataset statistics. The average document length is approximately 300 words. 3.2 Raw cross-lingual text categorization As is commonly done in text categorization (Sebastiani, 2005), the documents in our data are represented as bag-of-words, and classification is done using support vector machines (SVMs). One experimental run consists of 4 binary experiments – one class versus the rest, for each of the 4 classes. The results are reported through micro-averaged precision, recall and F1-score for the targeted class, as well as overall accuracy. The high results, on a par with text categorization experiments in the field, validates our experimental set-up. For the cross language categorization experiments described in this paper, we use the data described above, and train on one language (English/Italian), and test on the other, using the same 654 English Italian Categories Training Test Total Training Test Total quality of life 5759 1989 7748 5781 1901 7682 made in Italy 5711 1864 7575 6111 2068 8179 tourism 5731 1857 7588 6090 2015 8105 culture and school 3665 1245 4910 6284 2104 8388 Total 20866 6955 27821 24266 8088 32354 Table 1: Dataset statistics monolingual BoW categorization Prec Rec F1 Acc Train EN / Test EN 0.92 0.92 0.92 0.96 Train IT / Test IT 0.94 0.94 0.94 0.97 Table 2: Performance for monolingual raw text categorization experimental set-up as for the monolingual scenario (4 binary problems). The categorization baseline (BoW baseline in Figure 4) was obtained in this set-up. This baseline is higher than the random baseline or the positive class baseline2 (all instances are assigned the target class in each of the 4 binary classification experiments) due to shared words and names between the two languages. 3.3 Enriching the bag-of-word representation with word etymology As personal experience has shown us that etymological information is useful for comprehending a text in a different language, we set out to test whether this information can be useful in an automatic processing setting. We first verified whether the vocabularies of our two corpora, English and Italian, have shared word etymologies. Relying on word etymologies from the Etymological dictionary, we found that from our data’s vocabulary, 518 English terms and 543 Italian terms shared 490 direct etymological ancestors. Etymological ancestors also help cluster related terms within one language – 887 etymological ancestors for 4727 English and 864 ancestors for 5167 Italian terms. This overlap further increases when adding derived forms (through the has derived form relation). The fact that this overlap exists strengthens the motivation to try using etymological ancestors for the task of text categorization. In this first step of integrating word etymology 2In this situation the random and positive class baseline are the same: 25% F1 score. 
into the experiment, we extract for each word in each document in the dataset its ancestors from the Etymological dictionary. Because each word wj in a document di has associated an absolute frequency value fij (the number of occurrences of wj in di), for the added etymological ancestors ek in document Di we associate as value the sum of frequencies of their etymological children in di: fiek = X wj∈di wjetymology ek fij We make the depth of extraction a parameter, and generate data representation when considering only direct etymological antecedents (depth 1) and then up to a distance of N. For our dataset we noticed that the representation does not change after N=4, so this is the maximum depth we consider. The bag-of-words representation for each document is expanded with the corresponding etymological features. expansion training data vocabulary size vocabulary overlap with testing Train EN /Test IT raw 71122 14207 (19.9%) depth 1 78936 18275 (23.1%) depth 2 79068 18359 (23.2%) depth 3 79100 18380 (23.2%) depth 4 79103 18382 (23.2%) Train IT /Test EN raw 78750 14110 (17.9%) depth 1 83656 18682 (22.3%) depth 2 83746 18785 (22.4%) depth 3 83769 18812 (22.5%) depth 4 83771 18814 (22.5%) Table 3: Feature expansion with word etymologies Table 3 shows the training data vocabulary size and increase in the overlap between the training and test data with the addition of etymological fea655 tures. The increase is largest when introducing the immediate etymological ancestors, of approximately 4000 new (overlapping) features for both combinations of training and testing. Without etymological features the overlap was approximately 14000 for both configurations. The results obtained with this enriched BoW representation for etymological ancestor depth 1, 2 and 3 are presented in Figure 4. 3.4 Cross-lingual text categorization in a latent semantic space adding etymology Shared word etymologies can serve as a bridge between two languages as we have seen in the previous configuration. When using shared word etymologies in the bag-of-words representation, we only take advantage of the shallow association between these new features and the classes within which they appear. But through the co-occurrence of the etymological features and other words in different documents in the training data, we can induce a deeper representation for the words in a document, that captures better the relationship between the features (words) and the classes to which the documents belong. We use latent semantic analysis (LSA) (Deerwester et al., 1990) to perform this representational transformation. The process relies on the assumption that word co-occurrences across different documents are the surface manifestation of shared semantic dimensions. Mathematically, the ⟨word × document⟩ matrix D is expressed as a product of three matrices: D = V ΣU T by performing singular value decomposition (SVD). V would correspond roughly to a ⟨word × latent semantic dimension⟩matrix, U T is the transposed of a ⟨document × latent semantic dimension⟩matrix, and Σ is a diagonal matrix whose values are indicative of the “strength” of the semantic dimensions. By reducing the size of Σ, for example by selecting the dimensions with the top K values, we can obtain an approximation of the original matrix D ≈DK = VKΣKU T K, where we restrict the latent semantic dimensions taken into account to the K chosen ones. Figure 3 shows schematically the process. 
We perform this decomposition and dimension reduction step on the ⟨word × document⟩matrix built from the training data only, and using K=400. Both the training and test data are then reduction SVD and dimension dimension latent semantic dimension latent semantic dimension latent semantic K x K dimension latent semantic words V x D K x D documents documents V x K words x x Figure 3: Schematic view of LSA re-represented through the new word vectors from matrix VK. Because the LSA space was built only from the training data, only the shared words and shared etymological ancestors are used to produce representations of the test data. The categorization is done again with SVM. The results of this experiment are shown in Figure 4, together with an LSA baseline – using the raw data and relying on shared words and names as overlap. 4 Discussion The experiments whose results we present here were produced using unfiltered data – all words in the datasets, all etymological ancestors up to the desired depth, no filtering based on frequency of occurrence. Feature filtering is commonly done in machine learning when the data has many features, and in text categorization when using the bag-ofwords representation in particular. We chose not to perform this step for two main reasons: (i) filtering is sensitive to the chosen threshold; (ii) LSA thrives on word co-occurrences, which would be drastically reduced by word removal. The point that etymology information is a useful addition to the task of cross-language text categorization can be made without finding the optimal filtering setup. The baseline experiments show that despite the relatively large word overlap (approx. 14000 terms), cross-language text categorization gives low results. Adding a first batch of etymological information – approximately 4000 shared immediate ancestors – leads to an increase of 18 points in terms of F1-score on the BoW experimental set-up for English training/Italian testing, and 21 points for Italian training/English testing. Further additions of etymological ancestors at depths 2 and 3 results in an increase of 21 points in terms of F1-score for English training/Italian testing, and 27 points for Italian training/English testing. The higher increase in performance on this experimental configuration for Italian training/English testing is explained by the higher term overlap be656 0.4 0.5 0.6 0.7 0.8 0.9 1 F1−score Italian training, English testing 0.4 0.5 0.6 0.7 0.8 0.9 1 F1−score English training, Italian testing 0.4 0.5 0.6 0.7 0.8 0.9 1 Accuracy Italian training, English testing 0.4 0.5 0.6 0.7 0.8 0.9 1 Accuracy English training, Italian testing 0.42 0.83 0.79 0.65 0.43 BoW_etym BoW_etym LSA_etym LSA_etym BoW_etym LSA_etym LSA_etym BoW_etym depth=1 depth=2 depth=3 0.54 0.84 0.87 0.82 0.89 0.74 0.69 0.80 0.64 BoW_baseline LSA_baseline 0.72 0.71 Figure 4: CLTC results with etymological features tween the training and test data, as evidenced by the statistics in Table 3. The next processing step induced a representation of the shared words that encodes deeper level dependencies between words and documents based on word co-occurrences in documents. The LSA space built on the training data leads to a vector representation of the shared words, including the shared etymological ancestors, that captures more than the obvious word-document cooccurrences. 
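A minimal sketch of this decomposition and re-representation step is shown below (Python with SciPy's sparse SVD; the count-weighted fold-in is one common way to re-represent documents with the word vectors and is not necessarily the authors' exact procedure, and all names are illustrative):

import numpy as np
from scipy.sparse.linalg import svds

def build_lsa_space(word_by_doc, k=400):
    """Truncated SVD of the training (word x document) matrix, D ~ V_K S_K U_K^T.
    Returns V_K, whose rows are the k-dimensional vectors of the training words
    (including shared names and etymological ancestors)."""
    v_k, s_k, _ = svds(word_by_doc.astype(float), k=k)
    return v_k, s_k

def fold_in_document(doc_counts, vocab_index, v_k):
    """Re-represent one (training or test) document as the count-weighted sum of
    the LSA word vectors of the terms it shares with the training vocabulary;
    terms outside the training space are simply ignored."""
    vec = np.zeros(v_k.shape[1])
    for term, freq in doc_counts.items():
        j = vocab_index.get(term)
        if j is not None:
            vec += freq * v_k[j]
    return vec
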
Using this representation leads to a further increase of 15 points in F1-score for the English training/Italian testing set-up over the BoW representation, and 14 points over the baseline LSA-based categorization. The increase for Italian training/English testing is 5 points over the BoW representation, but 20 points over the baseline LSA. We saw that the high BoW performance on Italian training/English testing is due to the high term overlap. The clue to why the increase when using LSA is lower than for English training/Italian testing lies in the way LSA operates – it relies heavily on word co-occurrences in finding the latent semantic dimensions of documents and words. We expect then that in the Italian training collection words are "less shared" among documents, which means a lower average document frequency. Figure 5 shows the changes in average document frequency for the two training collections, starting with the raw data (depth 0) and adding etymological features up to depth 4.

Figure 5: Document frequency changes with the addition of etymological features (average document frequency for words in the EN and IT training data, plotted against etymology depth 0–4).

The shape of the document frequency curves mirrors the LSA results – the largest increase is the effect of adding the set of direct etymological ancestors, and additions of further, more distant ancestors lead to smaller improvements. We have performed the experiments described above on two releases of the Etymological dictionary. The results described in the paper were obtained on the latest release (February 2013). The difference in results between the two dictionary versions was significant: increases of 4 and 5 points respectively in micro-averaged F1-score in the bag-of-words setting for English training/Italian testing and Italian training/English testing, and increases of 2 and 6 points in the LSA setting. This indicates that more etymological information is better, and that the dynamic nature of Wikipedia and the Wiktionary could lead to an ever larger and better etymological resource for NLP applications.

5 Conclusion

The motivation for this work was to test the hypothesis that information about word etymology is useful for computational approaches to language, in particular for text classification. Cross-language text classification can be used to build comparable corpora in different languages from a single-language starting point, preferably one with more resources that can thus spill over to other languages. The experiments presented have shown clearly that etymological ancestors can be used to provide the necessary bridge between the languages we considered – English and Italian. Models produced on English data when using etymological information perform with high accuracy (89%) and high F1-score (80) on Italian test data, an increase of almost 40 points over a simple bag-of-words model, which, for crossing language boundaries, relies exclusively on shared names and words. Training on Italian data and testing on English data performed almost as well (87% accuracy, 75 F1-score). We plan to expand our experiments to more languages with shared etymologies, and to investigate which characteristics of languages and data indicate that etymological information is beneficial for the task at hand. We also plan to explore further uses for this language bridge, at a finer semantic level.
Monolingual and cross-lingual textual entailment in particular would be interesting applications, because they require finding shared meaning on two text fragments. Word etymologies would allow recognizing words with shared ancestors, and thus with shared meaning, both within and across languages. Acknowledgements We thank the reviewers for the helpful comments. This work was financially supported by the ECfunded project EXCITEMENT – EXploring Customer Interactions through Textual EntailMENT FP7 ICT-287923. Carlo Strapparava was partially supported by the PerTe project (Trento RISE). References Harold Borko and Myrna Bernick. 1962. Automatic Document Classification. System Development Corporation, Santa Monica, CA. Gerard de Melo and Gerhard Weikum. 2010. Towards universal multilingual knowledge bases. In Principles, Construction, and Applications of Multilingual Wordnets. Proceedings of the 5th Global WordNet Conference (GWC 2010), pages 149–156, New Delhi, India. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Socienty for Information Science, 41(6):391–407. Susan T. Dumais, Todd A. Letsche, Michael L. Littman, and Thomas K. Landauer. 1997. Automatic cross-language retrieval using latent semantic indexing. In AAAI Symposium on CrossLanguage Text and Speech Retrieval. Alex Chengyu Fang, Wanyin Li, and Nancy Ide. 2009. Latin etymologies as features on BNC text categorization. In 23rd Pacific Asia Conference on Language, Information and Computation (PACLIC 2009), pages 662–669. Blaz Fortuna and John Shawe-Taylor. 2005. The use of machine translation tools for cross-lingual text mining. In Learning with multiple views – Workshop at the 22nd International Conference on Machine Learning (ICML 2005). Alfio Gliozzo and Carlo Strapparava. 2005. Cross language text categorization by acquiring multilingual domain models from comparable corpora. In Proceedings of the ACL Workshop on Building and Using Parallel Texts. Alfio Gliozzo and Carlo Strapparava. 2006. Exploiting comparable corpora and bilingual dictionaries for cross-language text categorization. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006), pages 553–560, Sydney, Australia. 658 Yuhong Guo and Min Xiao. 2012. Cross language text classification via subspace co-regularized multiview learning. In Proceedings of the 29th International Conference on Machine Learning (ICML 2012), Edinburgh, Scotland, UK. Henry Hitchings. 2009. The Secret Life of Words: How English Became English. John Murray Publishers. Yue Liu, Lin Dai, Weitao Zhou, and Heyan Huang. 2012. Active learning for cross language text categorization. In Proceedings of the 16th Pacific-Asia conference on Advances in Knowledge Discovery and Data Mining (PAKDD 2012), pages 195–206, Kuala Lumpur, Malaysia. Peter Prettenhofer and Benno Stein. 2010. Crosslanguage text classification using structural correspondence learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), pages 1118–1127, Uppsala, Sweden. Leonardo Rigutini, Marco Maggini, and Bing Liu. 2005. An EM based training algorithm for crosslanguage text categorization. In Proceedings of the International Conference on Web Intelligence (WI 2005), pages 200–206, Compiegne, France. Fabrizio Sebastiani. 2005. Text categorization. 
In Alessandro Zanasi, editor, Text Mining and its Applications, pages 109–129. WIT Press, Southampton, UK. Lei Shi, Rada Mihalcea, and Minhgjun Tian. 2010. Cross language text classification by model translation and semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), pages 1057–1067, Uppsala, Sweden. Chang Wan, Rong Pan, and Jifei Li. 2011. Biweighting domain adaptation for cross-language text classification. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI 2011), pages 1535–1540, Barcelona, Catalonia, Spain. Ke Wu, Xiaolin Wang, and Bao-Liang Lu. 2008. Cross language text categorization using a bilingual lexicon. In Third International Joint Conference on Natural Language Processing (IJCNLP 2008), pages 165–172, Hyderabad, India. 659
2013
64
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 660–670, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Creating Similarity: Lateral Thinking for Vertical Similarity Judgments Tony Veale Guofu Li Web Science and Technology Division, School of Computer Science and Informatics, Korean Advanced Institute of Science University College Dublin, and Technology, Yuseong, South Korea Belfield, Dublin D2, Ireland. [email protected] [email protected] Abstract Just as observing is more than just seeing, comparing is far more than mere matching. It takes understanding, and even inventiveness, to discern a useful basis for judging two ideas as similar in a particular context, especially when our perspective is shaped by an act of linguistic creativity such as metaphor, simile or analogy. Structured resources such as WordNet offer a convenient hierarchical means for converging on a common ground for comparison, but offer little support for the divergent thinking that is needed to creatively view one concept as another. We describe such a means here, by showing how the web can be used to harvest many divergent views for many familiar ideas. These lateral views complement the vertical views of WordNet, and support a system for idea exploration called Thesaurus Rex. We show also how Thesaurus Rex supports a novel, generative similarity measure for WordNet. 1 Seeing is Believing (and Creating) Similarity is a cognitive phenomenon that is both complex and subjective, yet for practical reasons it is often modeled as if it were simple and objective. This makes sense for the many situations where we want to align our similarity judgments with those of others, and thus focus on the same conventional properties that others are also likely to focus upon. This reliance on the consensus viewpoint explains why WordNet (Fellbaum, 1998) has proven so useful as a basis for computational measures of lexico-semantic similarity (e.g. see Pederson et al. 2004, Budanitsky & Hirst, 2006; Seco et al. 2006). These measures reduce the similarity of two lexical concepts to a single number, by viewing similarity as an objective estimate of the overlap in their salient qualities. This convenient perspective is poorly suited to creative or insightful comparisons, but it is sufficient for the many mundane comparisons we often perform in daily life, such as when we organize books or look for items in a supermarket. So if we do not know in which aisle to locate a given item (such as oatmeal), we may tacitly know how to locate a similar product (such as cornflakes) and orient ourselves accordingly. Yet there are occasions when the recognition of similarities spurs the creation of similarities, when the act of comparison spurs us to invent new ways of looking at an idea. By placing pop tarts in the breakfast aisle, food manufacturers encourage us to view them as a breakfast food that is not dissimilar to oatmeal or cornflakes. When ex-PM Tony Blair published his memoirs, a mischievous activist encouraged others to move his book from Biography to Fiction in bookshops, in the hope that buyers would see it in a new light. Whenever we use a novel metaphor to convey a non-obvious viewpoint on a topic, such as “cigarettes are time bombs”, the comparison may spur us to insight, to see aspects of the topic that make it more similar to the vehicle (see Ortony, 1979; Veale & Hao, 2007). 
In formal terms, assume agent A has an insight about concept X, and uses the metaphor X is a Y to also provoke this insight in agent B. To arrive at this insight for itself, B must intuit what X and Y have in common. But this commonality is surely more than a standard categorization of X, or else it would not count as an insight about X. To understand the metaphor, B must place X 660 in a new category, so that X can be seen as more similar to Y. Metaphors shape the way we perceive the world by re-shaping the way we make similarity judgments. So if we want to imbue computers with the ability to make and to understand creative metaphors, we must first give them the ability to look beyond the narrow viewpoints of conventional resources. Any measure that models similarity as an objective function of a conventional worldview employs a convergent thought process. Using WordNet, for instance, a similarity measure can vertically converge on a common superordinate category of both inputs, and generate a single numeric result based on their distance to, and the information content of, this common generalization. So to find the most conventional ways of seeing a lexical concept, one simply ascends a narrowing concept hierarchy, using a process de Bono (1970) calls vertical thinking. To find novel, non-obvious and useful ways of looking at a lexical concept, one must use what Guilford (1967) calls divergent thinking and what de Bono calls lateral thinking. These processes cut across familiar category boundaries, to simultaneously place a concept in many different categories so that we can see it in many different ways. de Bono argues that vertical thinking is selective while lateral thinking is generative. Whereas vertical thinking concerns itself with the “right” way or a single “best” way of looking at things, lateral thinking focuses on producing alternatives to the status quo. To be as useful for creative tasks as they are for conventional tasks, we need to re-imagine our computational similarity measures as generative rather than selective, expansive rather than reductive, divergent as well as convergent and lateral as well as vertical. Though WordNet is ideally structured to support vertical, convergent reasoning, its comprehensive nature means it can also be used as a solid foundation for building a more lateral and divergent model of similarity. Here we will use the web as a source of diverse perspectives on familiar ideas, to complement the conventional and often narrow views codified by WordNet. Section 2 provides a brief overview of past work in the area of similarity measurement, before section 3 describes a simple bootstrapping loop for acquiring richly diverse perspectives from the web for a wide variety of familiar ideas. These perspectives are used to enhance a WordNet-based measure of lexico-semantic similarity in section 4, by broadening the range of informative viewpoints the measure can select from. Similarity is thus modeled as a process that is both generative and selective. This lateral-andvertical approach is evaluated in section 5, on the Miller & Charles (1991) data-set. A web app for the lateral exploration of diverse viewpoints, named Thesaurus Rex, is also presented, before closing remarks are offered in section 6. 
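To make the contrast with vertical convergence concrete: with an off-the-shelf WordNet interface, a conventional measure climbs from both inputs to a shared superordinate and reduces the comparison to a single score. The snippet below uses NLTK and the Wu-Palmer measure purely as an illustration of this convergent style; it is not part of the system described later, and the particular senses picked are whatever NLTK lists first.

```python
from nltk.corpus import wordnet as wn

cola = wn.synsets('cola', pos='n')[0]
acid = wn.synsets('acid', pos='n')[0]

# Vertically converge on a common superordinate (the least common subsumer)...
print(cola.lowest_common_hypernyms(acid))   # a very general shared category

# ...and collapse the comparison into one number based on that generalization.
print(cola.wup_similarity(acid))
```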
2 Related Work and Ideas WordNet’s taxonomic organization of nounsenses and verb-senses – in which very general categories are successively divided into increasingly informative sub-categories or instancelevel ideas – allows us to gauge the overlap in information content, and thus of meaning, of two lexical concepts. We need only identify the deepest point in the taxonomy at which this content starts to diverge. This point of divergence is often called the LCS, or least common subsumer, of two concepts (Pederson et al., 2004). Since sub-categories add new properties to those they inherit from their parents – Aristotle called these properties the differentia that stop a category system from trivially collapsing into itself – the depth of a lexical concept in a taxonomy is an intuitive proxy for its information content. Wu & Palmer (1994) use the depth of a lexical concept in the WordNet hierarchy as such a proxy, and thereby estimate the similarity of two lexical concepts as twice the depth of their LCS divided by the sum of their individual depths. Leacock and Chodorow (1998) instead use the length of the shortest path between two concepts as a proxy for the conceptual distance between them. To connect any two ideas in a hierarchical system, one must vertically ascend the hierarchy from one concept, change direction at a potential LCS, and then descend the hierarchy to reach the second concept. (Aristotle was also first to suggest this approach in his Poetics). Leacock and Chodorow normalize the length of this path by dividing its size (in nodes) by twice the depth of the deepest concept in the hierarchy; the latter is an upper bound on the distance between any two concepts in the hierarchy. Negating the log of this normalized length yields a corresponding similarity score. While the role of an LCS is merely implied in Leacock and Chodorow’s use of a shortest path, the LCS is pivotal nonetheless, and like that of Wu & Palmer, the approach uses an essentially vertical reasoning process to identify a single “best” generalization. Depth is a convenient proxy for information content, but more nuanced proxies can yield 661 more rounded similarity measures. Resnick (1995) draws on information theory to define the information content of a lexical concept as the negative log likelihood of its occurrence in a corpus, either explicitly (via a direct mention) or by presupposition (via a mention of any of its sub-categories or instances). Since the likelihood of a general category occurring in a corpus is higher than that of any of its sub-categories or instances, such categories are more predictable, and less informative, than rarer categories whose occurrences are less predictable and thus more informative. The negative log likelihood of the most informative LCS of two lexical concepts offers a reliable estimate of the amount of information shared by those concepts, and thus a good estimate of their similarity. Lin (1998) combines the intuitions behind Resnick’s metric and that of Wu and Palmer to estimate the similarity of two lexical concepts as an information ratio: twice the information content of their LCS divided by the sum of their individual information contents. Jiang and Conrath (1997) consider the converse notion of dissimilarity, noting that two lexical concepts are dissimilar to the extent that each contains information that is not shared by the other. 
So if the information content of their most informative LCS is a good measure of what they do share, then the sum of their individual information contents, minus twice the content of their most informative LCS, is a reliable estimate of their dissimilarity. Seco et al. (2006) presents a minor innovation, showing how Resnick’s notion of information content can be calculated without the use of an external corpus. Rather, when using Resnick’s metric (or that of Lin, or Jiang and Conrath) for measuring the similarity of lexical concepts in WordNet, one can use the category structure of WordNet itself to estimate information content. Typically, the more general a concept, the more descendants it will possess. Seco et al. thus estimate the information content of a lexical concept as the log of the sum of all its unique descendants (both direct and indirect), divided by the log of the total number of concepts in the entire hierarchy. Not only is this intrinsic view of information content convenient to use, without recourse to an external corpus, Seco et al. show that it offers a better estimate of information content than its extrinsic, corpus-based alternatives, as measured relative to average human similarity ratings for the 30 word-pairs in the Miller & Charles (1991) test set. A similarity measure can draw on other sources of information besides WordNet’s category structures. One might eke out additional information from WordNet’s textual glosses, as in Lesk (1986), or use category structures other than those offered by WordNet. Looking beyond WordNet, entries in the online encyclopedia Wikipedia are not only connected by a dense topology of lateral links, they are also organized by a rich hierarchy of overlapping categories. Strube and Ponzetto (2006) show how Wikipedia can support a measure of similarity (and relatedness) that better approximates human judgments than many WordNet-based measures. Nonetheless, WordNet can be a valuable component of a hybrid measure, and Agirre et al. (2009) use an SVM (support vector machine) to combine information from WordNet with information harvested from the web. Their best similarity measure achieves a remarkable 0.93 correlation with human judgments on the Miller & Charles word-pair set. Similarity is not always applied to pairs of concepts; it is sometimes analogically applied to pairs of pairs of concepts, as in proportional analogies of the form A is to B as C is to D (e.g., hacks are to writers as mercenaries are to soldiers, or chisels are to sculptors as scalpels are to surgeons). In such analogies, one is really assessing the similarity of the unstated relationship between each pair of concepts: thus, mercenaries are soldiers whose allegiance is paid for, much as hacks are writers with income-driven loyalties; sculptors use chisels to carve stone, while surgeons use scalpels to cut or carve flesh. Veale (2004) used WordNet to assess the similarity of A:B to C:D as a function of the combined similarity of A to C and of B to D. In contrast, Turney (2005) used the web to pursue a more divergent course, to represent the tacit relationships of A to B and of C to D as points in a highdimensional space. The dimensions of this space initially correspond to linking phrases on the web, before these dimensions are significantly reduced using singular value decomposition. In the infamous SAT test, an analogy A:B::C:D has four other pairs of concepts that serve as likely distractors (e.g. 
singer:songwriter for hack:writer) and the goal is to choose the most appropriate C:D pair for a given A:B pairing. Using variants of Wu and Palmer (1994) on the 374 SAT analogies of Turney (2005), Veale (2004) reports a success rate of 38–44% using only WordNet-based similarity. In contrast, Turney (2005) reports up to 55% success on the same analogies, partly because his approach aims 662 to match implicit relations rather than explicit concepts, and in part because it uses a divergent process to gather from the web as rich a perspective as it can on these latent relationships. 2.1 Clever Comparisons Create Similarity Each of these approaches to similarity is a user of information, rather than a creator, and each fails to capture how a creative comparison (such as a metaphor) can spur a listener to view a topic from an atypical perspective. Camac & Glucksberg (1984) provide experimental evidence for the claim that “metaphors do not use preexisting associations to achieve their effects […] people use metaphors to create new relations between concepts.” They also offer a salutary reminder of an often overlooked fact: every comparison exploits information, but each is also a source of new information in its own right. Thus, “this cola is acid” reveals a different perspective on cola (e.g. as a corrosive substance or an irritating food) than “this acid is cola” highlights for acid (such as e.g., a familiar substance) Veale & Keane (1994) model the role of similarity in realizing the long-term perlocutionary effect of an informative comparison. For example, to compare surgeons to butchers is to encourage one to see all surgeons as more bloody, crude or careless. The reverse comparison, of butchers to surgeons, encourages one to see butchers as more skilled and precise. Veale & Keane present a network model of memory, called Sapper, in which activation can spread between related concepts, thus allowing one concept to prime the properties of a neighbor. To interpret an analogy, Sapper lays down new activation-carrying bridges in memory between analogical counterparts, such as between surgeon & butcher, flesh & meat, and scalpel & cleaver. Comparisons can thus have lasting effects on how Sapper sees the world, changing the pattern of activation that arises when it primes a concept. Veale (2003) adopts a similarly dynamic view of similarity in WordNet, showing how an analogical comparison can result in the automatic addition of new categories and relations to WordNet itself. Veale considers the problem of finding an analogical mapping between different parts of WordNet’s noun-sense hierarchy, such as between instances of Greek god and Norse god, or between the letters of different alphabets, such as of Greek and Hebrew. But no structural similarity measure for WordNet exhibits enough discernment to e.g. assign a higher similarity to Zeus & Odin (each is the supreme deity of its pantheon) than to a pairing of Zeus and any other Norse god, just as no structural measure will assign a higher similarity to Alpha & Aleph or to Beta & Beth than to any random letter pairing. A fine-grained category hierarchy permits fine-grained similarity judgments, and though WordNet is useful, its sense hierarchies are not especially fine-grained. However, we can automatically make WordNet subtler and more discerning, by adding new fine-grained categories to unite lexical concepts whose similarity is not reflected by any existing categories. 
Veale (2003) shows how a property that is found in the glosses of two lexical concepts, of the same depth, can be combined with their LCS to yield a new fine-grained parent category, so e.g. “supreme” + deity = Supreme-deity (for Odin, Zeus, Jupiter, etc.) and “1st” + letter = 1st-letter (for Alpha, Aleph, etc.) Selected aspects of the textual similarity of two WordNet glosses – the key to similarity in Lesk (1986) – can thus be reified into an explicitly categorical WordNet form. 3 Divergent (Re)Categorization To tap into a richer source of concept properties than WordNet’s glosses, we can use web ngrams. Consider these descriptions of a cowboy from the Google n-grams (Brants & Franz, 2006). The numbers to the right are Google frequency counts. a lonesome cowboy 432 a mounted cowboy 122 a grizzled cowboy 74 a swaggering cowboy 68 To find the stable properties that can underpin a meaningful fine-grained category for cowboy, we must seek out the properties that are so often presupposed to be salient of all cowboys that one can use them to anchor a simile, such as "swaggering like a cowboy” or “as grizzled as a cowboy”. So for each property P suggested by Google n-grams for a lexical concept C, we generate a like-simile for verbal behaviors such as swaggering and an as-as-simile for adjectives such as lonesome. Each is then dispatched to Google as a phrasal query. We value quality over size, as these similes will later be used to find diverse viewpoints on the web via bootstrapping. We thus manually filter each web simile, to weed out any that are ill-formed, and those intended to be seen as ironic by their authors. This gives us a body of 12,000+ valid web similes. 663 Veale (2011, 2012, 2013) notes that web uses of the pattern “as P as C” are rife with irony. In contrast, web instances of “P S such as C” – where S denotes a superordinate of C – are rarely ironic. Hao & Veale (2010) exploit this fact to filter ironic comparisons from web similes, by re-expressing each “as P as C” simile as “P * such as C” (using a wildcard * to match any values for S) and looking for attested uses of this new form on the web. Since each hit will also yield a value for S via the wildcard *, and a finegrained category P-S for C, we use this approach here to harvest fine-grained categories from the web from most of our similes. Once C is seen to be an exemplary member of the category P-S, such as cola in fizzy-drink, a targeted web search is used to find other members of P-S, via the anchored query “P S such as * and C”. For example, “fizzy drinks such as * and cola” will retrieve web texts in which * is matched to soda or lemonade. Each new member can then be used to instantiate a further query, as in “fizzy drinks such as * and soda”, to retrieve other members of P-S, such as champagne and root beer. This bootstrapping process runs in successive cycles, using doubly-anchored patterns that – following Kozareva et al. (2008) and Veale et al. (2009) – explicitly mention both the category to be populated (P-S) and a recently acquired member of this category (C). As cautioned by Kozareva et al., it is reckless to bootstrap from members to categories to members again if each enfilade of queries is likely to return noisy results. A reliable filter must be applied at each stage, to ensure that any member C that is placed in a category P-S is a sensible member of the category S. Only by filtering in this way can we stop the rapid accumulation of noise. 
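The bootstrapping loop just described can be sketched as below. The search function (returning the strings matched by the wildcard *) and the passes_filter test are placeholders for components not reproduced here – concrete WordNet-based filters are discussed in the next paragraph – and the naive pluralization of the superordinate S is our simplification.

```python
def bootstrap_category(prop, sup, seed, search, passes_filter, cycles=5):
    """Populate the fine-grained category P-S (e.g. 'fizzy-drink') starting
    from one exemplary member (e.g. 'cola'), using the doubly-anchored
    pattern "<P> <S>s such as * and <member>"."""
    members, frontier = {seed}, {seed}
    for _ in range(cycles):
        newly_found = set()
        for member in frontier:
            query = f'"{prop} {sup}s such as * and {member}"'
            for candidate in search(query):        # terms matched by *
                if candidate not in members and passes_filter(candidate, sup):
                    newly_found.add(candidate)
        members |= newly_found
        frontier = newly_found                     # anchor the next cycle
    return members
```

Each cycle anchors its queries on the members acquired in the previous one, so a bad candidate admitted by the filter would keep propagating; this is why the choice of filter matters.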
For instance, a WordNet-based filter discards any categorization “P S such as X and C” where X does not denote a WordNet entry for which S does not denote a valid hypernym. Such a filter offers no creative latitude, however, since it forces every pairing of C and P-S to precisely obey WordNet’s category hierarchy. We thus use instead the near-miss filter described in Veale et al. (2009), in which X must denote a descendant of some direct hypernym of some sense of S. The filter does not (and cannot) determine whether P is salient for X. It merely assumes that if P is salient for C, it is salient for X. Five successive cycles of bootstrapping are performed, using the 12,000+ web similes as a starting point. Consider cola: after 1 cycle, we acquire 14 new categories, such as effervescentbeverage and sweet-beverage. After 2 cycles we acquire 43 categories; after 3 cycles, 72; after 4 cycles, 93; and after 5 cycles, we acquire 102 fine-grained perspectives on cola, such as stimulating-drink and corrosive-substance. Figure 1. Fine-grained perspectives for cola found by Thesaurus Rex on the web. See also Figures 3 and 4. These alternative viewpoints, for a broad array of concepts, are gleaned from the collective intelligence of the web. Some are more discerning and informative than others – see for instance war & divorce in Figure 4 – though as de Bono (1971) notes, lateral thinking does not privilege a narrow set of “correct” viewpoints, rather it generates a broad array of interesting alternatives, none of which are ever “wrong”, even if some prove more useful than others in a given context. 4 Measuring and Creating Similarity Which perspectives will be most useful and informative to a WordNet-based similarity metric? Simply, a perspective M-Cx for a concept Cy can be coherently added to WordNet iff Cx denotes a hypernym of some sense of Cy in WordNet. For purposes of quantifying the similarity of two terms t1 and t2 – by finding the WordNet senses of these terms that exhibit the highest similarity – we can augment WordNet with the perspectives on t1 and t2 that are coherent with WordNet’s hierarchy. So for t1=cola & t2=acid, corrosive-substance offers a coherent new perspective on each, slotting in beneath the matching WordNet sense of substance. A category system is a structured feature space. We estimate the similarity of C1 and C2 as the cosine of the angle between the feature vectors that are constructed for each. The dimensions of these vectors are the atomic hypernyms (direct or indirect) of C1 and C2 in WordNet; the value of a dimension H in a vector is the information content (IC) of the WordNet hypernym H: 664 size(H) Σc ∈ WN size(c)) Here size(H) is the total number of lexical concepts in category H in WordNet, excluding any instance-level concepts, as these illustrative individuals are not evenly distributed across WordNet categories. We also want any fine-grained perspective MH to influence our similarity metric, provided it can be coherently tied into WordNet as a shared hypernym of the two lexical concepts being compared. The absolute information content of a category M-H that is newly added to WordNet is given by (2): size(M-H) Σm-h ∈ WN size(m-h)) where size(M-H) is the number of lexical concepts in WordNet for which M-H can be added as a new hypernym. The denominator in (2) denotes the sum total of the size of all fine-grained categories that can be coherently added to WordNet for any term. 
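Both the near-miss filter and the information-content quantities for H and for a new category M-H (equations (1) and (2)) lend themselves to short sketches over NLTK's WordNet interface. The counting conventions below – for instance, whether a category counts itself, and the restriction to noun synsets – are our reading rather than details given above.

```python
import math
from nltk.corpus import wordnet as wn

def near_miss(x, s):
    """Accept X for category P-S if X denotes a descendant of some direct
    hypernym of some sense of S (the filter of Veale et al., 2009)."""
    x_senses = set(wn.synsets(x, pos='n'))
    for s_sense in wn.synsets(s, pos='n'):
        for parent in s_sense.hypernyms():
            if x_senses & set(parent.closure(lambda syn: syn.hyponyms())):
                return True
    return False

def size(synset):
    """size(H): the lexical concepts in category H (H plus its direct and
    indirect hyponyms); instance-level concepts are excluded, which NLTK's
    hyponyms() already does (instance_hyponyms() is a separate relation)."""
    return 1 + len(set(synset.closure(lambda syn: syn.hyponyms())))

# Denominator of equation (1): the summed size of all noun concepts.
# Expensive to compute; cache it once in any real implementation.
WN_MASS = sum(size(s) for s in wn.all_synsets('n'))

def ic(synset):
    """Equation (1): IC(H) = -log( size(H) / sum over WordNet of size(c) )."""
    return -math.log(size(synset) / WN_MASS)

def ic_abs(mh_size, finegrained_mass):
    """Equation (2): ICabs(M-H) = -log( size(M-H) / summed size of all the
    fine-grained categories that can be coherently added to WordNet )."""
    return -math.log(mh_size / finegrained_mass)
```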
The IC of M-H relative to H is estimated via the geometric mean of ICabs(M-H) and IC(H) is given by (3): (3) IC(M-H) = √ ICabs(M-H) . IC(H) For a shared dimension H in the feature vectors of concepts C1 and C2, if at least one fine-grained perspective M-H has been added to WordNet between H and C1 and between H and C2, then the value of dimension H for C1 and for C2 is given by (4): (4) weight(H) = max(IC(H), maxM IC(M-H)) When no shared perspective M-H can be added under H, then weight(H) = IC(H). A fine-grained perspective M-H will thus influence a similarity judgment between C1 and C2 only if M-H can be coherently added to WordNet as a hypernym of C1 and C2, and if M-H enriches our view of H. Unlike Resnick (1995), Lin (1998) and Seco et al. (2006), this vector-space approach does not hinge on the information content of a single LCS, so any shared hypernym H or perspective M-H can shape a similarity judgment according to its informativeness. 5 Empirical Evaluation Many fascinating perspectives on familiar ideas are bootstrapped from the web using similes as a starting point. These perspectives drive an exploratory web-aid to lateral thinking we call Thesaurus Rex, while the cosine-distance metric constructed from WordNet and these many finegrained categories is called, simply, Rex. When Rex provides a numeric estimate of similarity for two ideas, Thesaurus Rex provides an enhanced insight into why these ideas are similar, e.g. by explaining that cola & acid are not just substances, they are corrosive substances. We evaluate Rex by estimating how closely its judgments correlate with those of human judges on the 30-pair word set of Miller & Charles (M&C), who aggregated the judgments of multiple human raters into mean ratings for these pairs. We evaluate three variants of Rex on M&C: Rex-lat, which combines WordNet with all of Thesaurus Rex; Rex-wn, which uses only WordNet, with nothing at all from Thesaurus Rex; and Rex-pop, which enriches WordNet with only popular perspectives from Thesaurus Rex. A perspective is considered popular if it is discovered 5 or more times in the bootstrapping process, using 5 different anchors. While corrosive-substance is a popular category for acid, it not so for cola or juice. Popularity thus approximates what Ortony (1979) calls salience. Similarity metric r Similarity metric r Wu & Palmer’94* .74 Seco et al. ‘06* .84 Resnick ‘95* .77 Agirre et al. ‘09 .93 Leacock/Chod’98* .82 Han et al.’09 .856 Lin ‘98* .80 Rex-wn .84 Jiang/Conrath ‘97* -.81 Rex-lat .89 Li et al. ‘03 .89 Rex-pop .93 Table 1. Product-moment correlations (Pearson’s r) with mean human ratings on all 30 word pairs of the Miller & Charles similarity data-set. * As re-evaluated by Seco et al. (2006) for all 30 pairs Table 1 lists coefficients of correlation (Pearson’s r) with mean human ratings for a range of WordNet-based metrics. Table 1 includes the hybrid WordNet+web+SVM metric of Agirre et al. (2009) – who report a correlation of .93 – and the Mutual-Information-based PMImax metric of Han et al. (2009). The latter achieves good results for 27 of the 30 M&C pairs by enriching a PMI metric with an automatically-generated thesaurus. Yet while informative, this thesaurus is ( ) ( ) (2) ICabs(M-H) = -log (1) IC(H) = - log 665 not organized as an explanatory system of hierarchical categories as it is in Thesaurus Rex. Rex-wn does no better than Seco et al. (2006) on the M&C dataset, suggesting that Rex’s vectors of IC-weighted hypernyms are no more discerning than a single informative LCS. 
However, such vectors also permit Rex to incorporate additional, fine-grained perspectives from Thesaurus Rex, allowing Rex-lat in turn to achieve a comparable correlation to that of Li et al. (2003) – .89. Yet the formulation in (2) favors unusual or idiosyncratic perspectives that are unlikely to generalize across independent judges. The mean ratings of M&C are the stuff of consensus, not individual creativity, and outside the realm of creative metaphor it often makes sense to safely align our judgments with those of others. By limiting its use of Thesaurus Rex to the perspectives that other judges are most likely to use, Rex-pop obtains a correlation of .93 with mean human ratings on all 30 M&C pairs. This result is comparable to that reported by Agirre et al. (2009), who use SVM-based supervised learning to combine the judgments of two metrics, one based on WordNet and another on the analysis of web contexts of both input terms. However, Rex has the greater capacity for insight, since it augments the structured category system of WordNet with structured categories of its own. At each level of the WordNet hierarchy, Rex finds the fine-grained category that can best inform its judgments. Because Rex makes highly selective use of the diverse products of lateral thinking, this selectivity also produces concise explanations for its judgments. 5.1 Generative Uses of Similarity A similarity metric offers a numerical measure of how closely one idea can cluster with another. It can also indicate how well one object may serve as a substitute for another, as when a letter opener is used as a knife, or tofu is used instead of meat. This need for substitution can be grist for creativity, yet most similarity metrics can only assess a suggested substitution, rather than suggest one for themselves. If they are to actively shape a creative decision, our similarity metrics must be made more generative. A similarity metric can learn to be generative, by observing how people typically cluster words and ideas that are made similar by their contexts of use. The Google 3-grams contain many instances of the clustering pattern “X+s and Y+s”, as in “cowboys and pirates” or “doctors and lawyers”, and so a comprehensive trawl yields many insights into the pairings of ideas that we implicitly see as comparable. We harvest all such Google 3-grams, to build a symmetric comparability graph in which any two comparable terms are adjacent nodes. For any node, we can generate a diverse set of comparable ideas just by reading off its adjacent nodes. Thesaurus Rex can be used to find an embracing category for many such pairs of nodes, while Rex estimates the similarity of any two adjacent nodes. A comparability graph of 28,000 nodes is produced from the Google 3-grams, with a sparse adjacency matrix of just 1,264,827 (0.16%) non-zero entries. Is this dense enough for a task requiring generative similarity? Almuhareb & Poesio (2004) describe one such task: they sample 214 words from across 13 WordNet categories, and ask if these 214 words can be partitioned into 13 clusters that mirror the WordNet categories from which they were drawn. They then collect tens of thousands of web contexts for these 214 words, to extract a feature representation of each. We instead use Rex to generate, as features, a diverse set of comparable terms for each word. (We also assume that each word is a feature of itself). The Rex comparability graph suggests a pool of 8,300 features for all 214 words. 
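The comparability graph described above can be sketched as follows; the regular-expression reading of the 3-gram pattern and the naive stripping of the final plural 's' are our simplifications.

```python
import re
from collections import defaultdict

PATTERN = re.compile(r"^(\w+)s and (\w+)s$")       # e.g. "cowboys and pirates"

def build_comparability_graph(three_grams):
    """three_grams: iterable of (ngram, count) pairs from the Google 3-grams.
    Returns a symmetric adjacency map: term -> set of comparable terms."""
    graph = defaultdict(set)
    for ngram, count in three_grams:
        match = PATTERN.match(ngram)
        if match:
            x, y = match.group(1), match.group(2)  # naive de-pluralization
            graph[x].add(y)
            graph[y].add(x)
    return graph

def comparable_features(word, graph):
    """Feature set for a word: the word itself plus its adjacent nodes."""
    return {word} | graph.get(word, set())
```

Reading off a node's neighbours in this graph is what supplies the pool of comparability features used in the clustering experiment that follows.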
The clustering toolkit CLUTO is used to partition the original 214 words into 13 clusters guided only by these comparability features. The resulting 13 clusters have an average purity of 93.4% relative to WordNet, suggesting that categorization tasks which require implicit comparability judgments are well served by a generative approach to similarity. 5.2 Learning From Similarity Judgments Rex augments the narrow worldview of WordNet with the more diverse viewpoints it gleans from the web, not by viewing them as separate knowledge sources, but by actually updating WordNet itself. The relative performance of Rex-pop > Rex-lat > Rex-wn on the M&C dataset shows that selective use of a divergent perspective permits WordNet to better serve its popular role as a judge of similarity. It is worth asking then whether these passing additions to WordNet should not be made permanent. Rex estimates a similarity score for each of the 1,264,827 pairings of comparable terms it finds in the Google 3-grams. These scores are then cached to support generative similarity, and to permit fast lookup of scores for common comparisons. This lookup table is a lightweight means of using Rex in a range of creative substitution or generation tasks. Though the table is 666 sparse, §5.1 shows that it implicitly captures key nuances of category structure. The 39,826 unique fine-grained categories added by Rex-pop (versus the 44,238 categories added by Rex-lat) in the course of its 1,264,827 comparisons thus suggest credible enhancements to WordNet. Figure 2 graphs the distribution of new categories and their membership sizes when Rex-pop is used on this scale. Figure 2. The number of new categories (Y-axis) with a given membership size (X-axis) added to WordNet when Rex-pop/lat are used on a large, web scale. The Goldilocks categories are those that are not so small as to lack generality, and not so large as to lack information content. For example, Rexpop suggests the addition of 15,125 new finegrained categories to WordNet with membership sizes ranging from 5 to 25. This is a large but manageable number of categories that should be further considered for future addition to WordNet, or indeed to any similarly curated knowledge resource. 6 Summary and Conclusions de Bono (1970) argues that the best solutions arise from using lateral and vertical thinking in unison. Lateral thinking is divergent and generative, while vertical thinking is convergent and analytical. The former can thus be used to create a pool of interesting candidates for the latter to selectively consider. Thesaurus Rex uses the web to generate a rich pool of alternate perspectives on familiar ideas, and Rex selects from this pool to perform vertical reasoning with WordNet to yield precise similarity judgments. Rex also uses the most informative perspective to concisely explain each comparison, or – when used in generative mode – to suggest a creative comparison. For instance, to highlight the potential toxicity of coffee, Thesaurus Rex suggests comparisons with alcohol, tobacco or pesticide, as all have been categorized as toxic substances on the web. A web app based on Thesaurus Rex, to support this kind of lateral thinking, is accessible online at this URL: http://boundinanutshell.com/therex2 Screenshots from the Thesaurus Rex application are provided in Figures 3 and 4 overleaf. 
Because Thesaurus Rex targets the acquisition of fine-grained perspectives, ranging from the offbeat to the obvious, it acquires an order-ofmagnitude more categories from the web than can be found in WordNet itself. Rex dips selectively into this wealth of perspectives (and Rexpop is more selective still), though many of Rex’s needs can be anticipated by looking to how ideas are implicitly grouped into ad-hoc categories (Barsalou, 1983) in constructions such as “X+s and Y+s”. Using the Google n-grams as a source of tacit grouping constructions, we have created a comprehensive lookup table that provides Rex similarity scores for the most common (if often implicit) comparisons. Comparability is not the same as similarity, and a non-zero similarity score does not mean that two concepts would ever be considered comparable by a human. This poses a problem for the generation of sensible comparisons. However, Rex’s lookup table captures the implicit pragmatics of comparability, making Rex usable in generative tasks where a metric must both suggest and evaluate comparisons. Human similarity mechanisms are evaluative and generative, convergent and divergent. Our computational mechanisms should be no less so. 7 Acknowledgements This research was partly supported by the WCU (World Class University) program under the National Research Foundation of Korea (Ministry of Education, Science and Technology of Korea, Project no. R31-30007) and partly funded by Science Foundation Ireland via the Centre for Next Generation Localization (CNGL). 667 Figure 3. A screenshot from the web application Thesaurus Rex, showing the fine-grained categories found by Thesaurus Rex for the lexical concept creativity on the web. Figure 4. A screenshot from the web application Thesaurus Rex, showing the shared overlapping categories found by Thesaurus Rex for the lexical concepts divorce and war. 668 References Aristotle (translator: James Hutton). 1982. Aristotle’s Poetics. New York: Norton. Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pasca and Aitor Soroa. 2009. Study on Similarity and Relatedness Using Distributional and WordNet-based Approaches. In Proceedings of NAACL '09, The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 19—27. Abdulrahman Almuhareb and Massimo Poesio. 2004. Attribute-Based and Value-Based Clustering: An Evaluation. In Proceedings of the Conference on Empirical Methods in NLP, Barcelona. pp. 158165. Lawrence W. Barsalou. 1983. Ad hoc categories. Memory and Cognition, 11:211–227. Thorsten Brants and Alex Franz. 2006. Web 1T 5gram Ver. 1. Philadelphia: Linguistic Data Consortium. Alexander Budanitsky and Graeme Hirst. 2006. Evaluating WordNet-based Measures of Lexical Semantic Relatedness. Computational Linguistics, 32(1):13-47. Mary K. Camac, and Sam Glucksberg. 1984. Metaphors do not use associations between concepts, they are used to create them. Journal of Psycholinguistic Research, 13, 443-455. de Bono, Edward. 1970. Lateral thinking: creativity step by step. New York: Harper & Row. de Bono, Edward. 1971. Lateral thinking for management: a handbook for creativity. New York: McGraw Hill. Christiane Fellbaum (ed.). 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA. J. Paul Guilford. 1967. The Nature of Human Intelligence. New York: McGraw Hill. Lushan Han, Tim Finin, Paul McNamee, Anupam Joshi and Yelena Yesha. 2012. Improving Word Similarity by Augmenting PMI with Estimates of Word Polysemy. 
IEEE Transactions on Data and Knowledge Engineering (13 Feb. 2012). Yanfen Hao and Tony Veale. 2010. An Ironic Fist in a Velvet Glove: Creative Mis-Representation in the Construction of Ironic Similes. Minds and Machines 20(4), pp. 635–650. Jay Y. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the 10th International Conference on Research in Computational Linguistics, pp. 19-33. Zornitsa Kozareva, Eileen Riloff and Eduard Hovy. 2008. Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs. In Proc. of the 46th Annual Meeting of the ACL, pp 1048-1056. Claudia Leacock and Martin Chodorow. 1998. Combining local context and WordNet similarity for word sense identification. In Fellbaum, C. (ed.), WordNet: An Electronic Lexical Database, 265– 283. Yuhua Li, Zuhair A. Bandar and David McLean. 2003. An Approach for Measuring Semantic Similarity between Words Using Multiple Information Sources. IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 4, pp. 871-882. Dekang Lin. 1998. An information-theoretic definition of similarity. In Proceedings of the 15th ICML, the International Conference on Machine Learning, Morgan Kaufmann, San Francisco CA, pp. 296– 304. Michael Lesk. 1986 Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of ACM SigDoc, ACM, 24–26. George A. Miller and Walter. G. Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1):1-28. Andrew Ortony. 1979. Beyond literal similarity. Psychological Review, 86, pp. 161-180. Ted Pederson, Siddarth Patwardhan and Jason Michelizzi. 2004. WordNet::Similarity: measuring the relatedness of concepts. In Proceedings of HLT-NAACL’04 (Demonstration Papers) the 2004 annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 38-41. Philip Resnick. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. In Proceedings of IJCAI’95, the 14th International Joint Conference on Artificial Intelligence. Nuno Seco, Tony Veale and Jer Hayes, 2004. An Intrinsic Information Content Metric for Semantic Similarity in WordNet. In Proceedings of ECAI’04, the European Conference on Artificial Intelligence. Michael Strube and Simone Paolo Ponzetto. 2006. WikiRelate! Computing Semantic Relatedness Using Wikipedia. In Proceedings of AAAI-06, the 2006 Conference of the Association for the Advancement of AI, pp. 1419–1424. Peter Turney. 2005. Measuring semantic similarity by latent relational analysis. Proceedings of the 19th International Joint Conference on Artificial Intelligence, 1136-1141. 669 Tony Veale and Mark T. Keane. 1994. Belief Modeling, Intentionality and Perlocution in Metaphor Comprehension. In Proceedings of the 16th Annual Meeting of the Cognitive Science Society, Atlanta, Georgia. Hillsdale, NJ: Lawrence Erlbaum. Tony Veale. 2003. The analogical thesaurus: An emerging application at the juncture of lexical metaphor and information retrieval. In Proceedings of IAAI’03, the 15th International Conference on Innovative Applications of Artificial Intelligence, Mexico. Tony Veale. 2004. WordNet sits the SAT: A knowledge-based approach to lexical analogy. Proceedings of ECAI'04, the European Conference on Artificial Intelligence, 606-612. Tony Veale and Yanfen Hao. 2007. Comprehending and Generating Apt Metaphors: A Web-driven, Case-based Approach to Figurative Language. 
In proceedings of AAAI 2007, the 22nd AAAI Conference on Artificial Intelligence. Vancouver, Canada. Tony Veale, Guofu Li and Yanfen Hao. 2009. Growing Finely-Discriminating Taxonomies from Seeds of Varying Quality and Size. In Proc. of EACL’09, the 12th Conference of the European Chapter of the Association for Computational Linguistics pp. 835842. Tony Veale. 2011. Creative Language Retrieval: A Robust Hybrid of Information Retrieval and Linguistic Creativity. In Proceedings of ACL’2011, the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Tony Veale. 2012. Exploding the Creativity Myth: The computational foundations of linguistic creativity. London: Bloomsbury Academic. Tony Veale. 2013. Humorous Similes. Humor: The International Journal of Humor Research, 21(1):322. Zhibiao Wu and Martha Palmer. 1994. Verb semantics and lexical selection. In Proceedings of ACL’94, 32nd annual meeting of the Association for Computational Linguistics, Las Cruces, New Mexico,. pp. 133-138. 670
2013
65
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 671–681, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Discovering User Interactions in Ideological Discussions Arjun Mukherjee Bing Liu Department of Computer Science University of Illinois at Chicago [email protected] [email protected] Abstract Online discussion forums are a popular platform for people to voice their opinions on any subject matter and to discuss or debate any issue of interest. In forums where users discuss social, political, or religious issues, there are often heated debates among users or participants. Existing research has studied mining of user stances or camps on certain issues, opposing perspectives, and contention points. In this paper, we focus on identifying the nature of interactions among user pairs. The central questions are: How does each pair of users interact with each other? Does the pair of users mostly agree or disagree? What is the lexicon that people often use to express agreement and disagreement? We present a topic model based approach to answer these questions. Since agreement and disagreement expressions are usually multiword phrases, we propose to employ a ranking method to identify highly relevant phrases prior to topic modeling. After modeling, we use the modeling results to classify the nature of interaction of each user pair. Our evaluation results using real-life discussion/debate posts demonstrate the effectiveness of the proposed techniques. 1 Introduction Online discussion/debate forums allow people with common interests to freely ask and answer questions, to express their views and opinions on any subject matter, and to discuss issues of common interest. A large part of such discussions is about social, political, and religious issues. On such issues, there are often heated discussions/debates, i.e., people agree or disagree and argue with one another. Such ideological discussions on a myriad of social and political issues have practical implications in the fields of communication and political science as they give social scientists an opportunity to study real-life discussions/debates of almost any issue and analyze participant behaviors in a large scale. In this paper, we present such an application, which aims to perform fine-grained analysis of user-interactions in online discussions. There have been some related works that focus on discovering the general topics and ideological perspectives in online discussions (Ahmed and Xing, 2010), placing users in support/oppose camps (Agarwal et al., 2003), and classifying user stances (Somasundaran and Wiebe, 2009). However, these works are at a rather coarser level and have not considered more fine-grained characteristics of debates/discussions where users interact with each other by quoting/replying each other to express agreement or disagreement and argue with one another. In this work, we want to mine the following information: 1. The nature of interaction of each pair of users or participants who have engaged in the discussion of certain issues, i.e., whether the two persons mostly agree or disagree with each other in their interactions. 2. What language expressions are often used to express agreement (e.g., “I agree” and “you’re right”) and disagreement (e.g., “I disagree” and “you speak nonsense”). 
We note that although agreement and disagreement expressions are distinct from traditional sentiment expressions (words and phrases) such as good, excellent, bad, and horrible, agreement and disagreement clearly express a kind of sentiment as well. They are usually emitted during interactive exchanges of arguments in ideological discussions. This idea prompted us to introduce the concept of ADsentiment. We define the polarity of agreement expressions as positive and the polarity of disagreement expressions as negative. We refer agreement and disagreement expressions as ADsentiment expressions, or AD-expressions for short. AD-expressions are crucial for the analysis of interactive discussions and debates just as sentiment expressions are instrumental in sentiment analysis (Liu, 2012). We thus regard this work as an extension to traditional sentiment 671 analysis (Pang and Lee, 2008; Liu, 2012). In our earlier work (Mukherjee and Liu, 2012a), we proposed three topic models to mine contention points, which also extract ADexpressions. In this paper, we further improve the work by coupling an information retrieval method to rank good candidate phrases with topic modeling in order to discover more accurate ADexpressions. Furthermore, we apply the resulting AD-expressions to the new task of classifying the arguing or interaction nature of each pair of users. Using discovered AD-expressions for classification has an important advantage over traditional classification because they are domain independent. We employ a semi-supervised generative model called JTE-P to jointly model AD-expressions, pair interactions, and discussion topics simultaneously in a single framework. With such complex interactions mined, we can produce many useful summaries of discussions. For example, we can discover the most contentious pairs for each topic and ideological camps of participants, i.e., people who often agree with each other are likely to belong to the same camp. The proposed framework also facilitates tracking users’ ideology shifts and the resulting arguing nature. The proposed methods have been evaluated both qualitatively and quantitatively using a large number of real-life discussion/debate posts from four domains. Experimental results show that the proposed model is highly effective in performing its tasks and outperforms several baselines. 2 Related Work There are several research areas that are related to our work. We compare with them below. Sentiment analysis: Sentiment analysis determines positive and negative opinions expressed on entities and aspects (Hu and Liu, 2004). Main tasks include aspect extraction (Hu and Liu, 2004; Popescu and Etzioni, 2005), polarity identification (Hassan and Radev, 2010; Choi and Cardie, 2010) and subjectivity analysis (Wiebe, 2000). As discussed earlier, agreement and disagreement are a special form of sentiments and are different from the sentiment studied in the mainstream research. Traditional sentiment is mainly expressed with sentiment terms (e.g., great and bad), while agreement and disagreement are inferred by AD-expressions (e.g., I agree and I disagree), which we also call AD-sentiment expressions. Thus, this work expands the sentiment analysis research. Topic models: Our work is also related to topic modeling and joint modeling of topics and other information as we jointly model several aspects of discussions/debates. 
Topic models like pLSA (Hofmann, 1999) and LDA (Blei et al., 2003) have proved to be very successful in mining topics from large text collections. There have been various extensions to multi-grain (Titov and McDonald, 2008), labeled (Ramage et al., 2009), and sequential (Du et al., 2010) topic models. Yet other approaches extend topic models to produce author specific topics (Rosen-Zvi et al., 2004), author persona (Mimno and McCallum, 2007), social roles (McCallum et al., 2007), etc. However, these models do not model debates and hence are unable to discover AD-expressions and interaction natures of author pairs. Also related are topic models in sentiment analysis which are often referred to as Aspect and Sentiment models (ASMs). ASMs come in two main flavors: Type-1 ASMs discover aspect (or topic) words sentiment-wise (i.e., discovering positive and negative topic words and sentiments for each topic without separating topic and sentiment terms) (e.g., Lin and He, 2009; Brody and Elhadad, 2010, Jo and Oh, 2011). Type-2 ASMs separately discover both aspects and sentiments (e.g., Mei et al., 2007; Zhao et al., 2010). Recently, domain knowledge induced ASMs have also been proposed (Mukherjee and Liu, 2012b; Chen et al., 2013). The generative process of ASMs is, however, different from our model. Specifically, Type-1 ASMs use asymmetric hyper-parameters for aspects while Type-2 assumes that sentiments and aspects are emitted in the same sentence. However, ADexpressions are emitted differently. They are mostly interleaved with users’ topical viewpoints and span different sentences. Further, we capture the key characteristic of discussions by encoding pair-wise user interactions. Existing models do not model pair interactions. In terms of discussions and comments, Yano et al., (2009) proposed the CommentLDA model which builds on the work of LinkLDA (Erosheva et al., 2004). Mukherjee and Liu (2012d) mined comment expressions. These works, however, don’t model pair interactions in debates. Support/oppose camp classification: Several works have attempted to put debate authors into support/oppose camps. Agrawal et al. (2003) used a graph based method. Murakami and Raymond (2010) used a rule-based method. In (Galley et al., 2004; Hillard et al., 2003), speaker 672 utterances were classified into agreement, disagreement and backchannel classes. Stances in online debates: Somasundaran and Wiebe (2009), Thomas et al. (2006), Bansal et al. (2008), Burfoot et al. (2011), and Anand et al. (2011) proposed methods to recognize stances in online debates. Some other research directions include subgroup detection (Abu-Jbara et al., 2012), tolerance analysis (Mukherjee et al., 2013), mining opposing perspectives (Lin and Hauptmann, 2006), linguistic accommodation (Mukherjee and Liu, 2012c), and contention point mining (Mukherjee and Liu, 2012a). For this work, we adopt the JTE-P model in (Mukherjee and Liu, 2012a), and make two major advances. We propose a new method to improve the AD-expression mining and a new task of classifying pair interaction nature to determine whether each pair of users who have interacted based on replying relations mostly agree or disagree with each other. 3 Model We now introduce the JTE-P model with additional details. JTE-P is a semi-supervised generative model motivated by the joint occurrence of expression types (agreement and disagreement), topics in discussion posts, and user pairwise interactions. Before proceeding, we make the following observation about online discussions. 
In a typical debate/discussion post, the user (author) mentions a few topics (using semantically related topical terms) and expresses some viewpoints with one or more ADexpression types (using agreement and disagreement expressions). AD-expressions are directed towards other user(s), which we call target(s). In this work, we focus on explicit mentions (i.e., using @name or quoting other authors’ posts). In our crawled dataset, 77% of all posts exhibit explicit quoting/reply-to relations excluding the first posts of threads which start the discussions and usually have nobody to quote/reply-to. Such author-target exchanges usually go back and forth between pairs of users populating a thread of discussion. The discussion topics and AD-expressions emitted are thus caused by the author-pairs’ topical interests and their nature of interaction (agreeing vs. disagreeing). In our discussion data obtained from Volconvo.com, we found that a pair of users typically exhibited a dominant arguing nature (agreeing vs. disagreeing) towards each other across various topics or threads. We believe this is because our data consists of topics like elections, theism, terrorism, vegetarianism, etc. which are often heated and attract people with pre-determined, strong, and polarized stances1. This observation motivates the generative process of our model. Referring to the notations in Table 1, we explain the generative process of JTE-P. Given a document (post) 𝑑, its author, 𝑎𝑑, and the list of targets to whom 𝑎𝑑 replies/quotes 1 These hardened perspectives are supported by theoretical studies in communications like the polarization effect (Sunstein, 2002), and the hostile media effect, a scenario where partisans rigidly hold on to their stances (Hansen and Hyunjung, 2011). Figure 1: JTE-P Model in plate notation. Variable/Function Description 𝑑; 𝑎𝑑 A document (post) 𝑑; author 𝑎 of document, 𝑑 𝑏𝑑= [𝑏1 … 𝑏𝑛] List of targets to whom 𝑎𝑑 replies/quotes in d. 𝑝= (𝑎, 𝑎′) Pair of two authors interacting by reply/quote. 𝜃𝑝𝑇; 𝜃𝑝𝐸(𝜃𝑝,𝐴𝑔 𝐸 , 𝜃𝑝,𝐷𝑖𝑠𝐴𝑔 𝐸 ) Pair 𝑝’s distribution over topics ; expression types (Agreement: 𝜃𝑝,𝐴𝑔 𝐸 , Disagreement: 𝜃𝑝,𝐷𝑖𝑠𝐴𝑔 𝐸 ) 𝜑𝑡 𝑇; 𝜑𝑒∈{𝐴𝑔,𝐷𝑖𝑠𝐴𝑔} 𝐸 Topic 𝑡’s ; Expression type 𝑒’s distribution over vocabulary terms 𝑇; 𝐸 Total number of topics; expression types 𝑉; 𝑃 Total number of vocabulary terms; pairs 𝑤𝑑,𝑗; 𝑁𝑑 𝑗𝑡ℎ term in 𝑑; Total # of terms in 𝑑 𝜓 𝑑,𝑗 Distribution over topics and ADexpressions 𝑥𝑑,𝑗 Associated feature context of the observed term 𝑤𝑑,𝑗 𝜆 Learned Max-Ent parameters 𝑟𝑑,𝑗∈{𝑡̂, 𝑒̂} Binary indicator/switch variable ( topic (𝑡̂) or AD-expression (𝑒̂) ) for 𝑤𝑑,𝑗 𝑧𝑑,𝑗 Topic/Expression type of 𝑤𝑑,𝑗 𝛼𝑇; 𝛼𝐸; 𝛽𝑇; 𝛽𝐸 Dirichlet priors of 𝜃𝑝𝑇; 𝜃𝑝𝐸; 𝜑𝑡 𝑇; 𝜑𝑒𝐸 𝑛𝑝,𝑡 𝑃𝑇; 𝑛𝑝,𝑒 𝑃𝐸 # of times topic 𝑡; expression type 𝑒 assigned to 𝑝 𝑛𝑡,𝑣 𝐶𝑇; 𝑛𝑒,𝑣 𝐶𝐸 # of times term 𝑣 appears in topic 𝑡; expression type 𝑒 Table 1: List of Notations φT T φE E βE βT αE θ E θ T P αT z r w c p D Nd ψ x λ w ad bd 673 in 𝑑, 𝑏𝑑= [𝑏1 … 𝑏𝑛] , the document 𝑑 exhibits shared topics and arguing nature of various pairs, 𝑝= (𝑎𝑑, 𝑐) , where 𝑐∈𝑏𝑑. More precisely, the pair specific topic and AD-expression distributions (𝜃𝑝𝑇; 𝜃𝑝𝐸) “shape” the topics and AD-expressions emitted in 𝑑 as agreement and disagreement on topical viewpoints are directed towards certain target authors. Each topic (𝜑𝑡 𝑇) and AD-expression type (𝜑𝑒 𝐸) is characterized by a multinomial distribution over terms (words/phrases). Assume we have 𝑡= 1 … 𝑇 topics and 𝑒= 1 … 𝐸 expression types in our corpus. 
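To make the author-target pairing concrete, the following sketch groups posts under each author pair implied by the explicit quote/reply-to relations described above. The post records and field names are hypothetical, not the authors' preprocessing code, and the pair is treated as unordered (an assumption, since agreement and disagreement flow in both directions of an exchange).

```python
from collections import defaultdict
from itertools import chain

# Hypothetical post records (field names invented): author, explicit quote/reply-to
# targets extracted from @name mentions or quoted posts, and the tokenized content.
posts = [
    {"author": "userA", "targets": ["userB"], "tokens": ["i", "disagree", "income", "tax"]},
    {"author": "userB", "targets": ["userA"], "tokens": ["you", "are", "right", "elections"]},
    {"author": "userA", "targets": ["userB", "userC"], "tokens": ["valid", "point", "theism"]},
]

def author_pairs(posts):
    """Group posts under every author pair p = (a, a') implied by the explicit
    reply/quote relations; a post with several targets contributes to several pairs."""
    pair_posts = defaultdict(list)
    for d in posts:
        for c in d["targets"]:
            p = tuple(sorted((d["author"], c)))   # unordered pair, assumed symmetric here
            pair_posts[p].append(d["tokens"])
    return pair_posts

for p, docs in author_pairs(posts).items():
    n_terms = len(list(chain.from_iterable(docs)))
    print(p, "->", len(docs), "posts,", n_terms, "terms")
```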
Note that in our case of discussion/debate forums, we hypothesize 𝐸 = 2 as in debates, we mostly find two expression types: agreement and disagreement (more details in §6.1). Like most generative models for text, a post (document) is viewed as a bag of n-grams and each n-gram (word/phrase) takes one value from a predefined vocabulary. In this work, we use up to 4-grams, i.e., n = 1, 2, 3, 4. Instead of using all n-grams, a relevance based ranking method is proposed to select a subset of highly relevant n-grams for model building (details in §4). For notational convenience, we use terms to denote both words (unigrams) and phrases (n-grams). JTE-P is a switching graphical model (Ahmed and Xing, 2010; Zhao et al., 2010) performing a switch between AD-expressions and topics. 𝜓𝑑,𝑗 denotes the distribution over topics and ADexpressions with 𝑟𝑑,𝑗∈{𝑡̂, 𝑒̂} denoting the binary indicator/switch variable (topic or ADexpression) for the 𝑗th term of 𝑑, 𝑤𝑑,𝑗. To perform the switch we use a maximum entropy (Max-Ent) model. The idea is motivated by the observation that topical and AD-expression terms usually play different roles in a sentence. Topical terms (e.g., “elections” and “income tax”) tend to be noun and noun phrases while AD-expression terms (“I refute”, “how can you say”, and “probably agree”) usually contain pronouns, verbs, wh-determiners, and modals. In order to utilize the part-of-speech (POS) tag information, we place the topic/AD-expression distribution 𝜓𝑑,𝑗 (the prior over the indicator variable 𝑟𝑑,𝑗) in the term plate (see Figure 1) and set it from a Max-Ent model conditioned on the observed feature context 𝑥𝑑,𝑗 associated with 𝑤𝑑,𝑗 and the learned Max-Ent parameters, 𝜆 (details in §6.1). In this work, we use both lexical and POS features of the previous, current, and next POS tags/lexemes of the term 𝑤𝑑,𝑗 as the contextual information, i.e., 𝑥𝑑,𝑗= [𝑃𝑂𝑆𝑤𝑑,𝑗−1, 𝑃𝑂𝑆𝑤𝑑,𝑗, 𝑃𝑂𝑆𝑤𝑑,𝑗+1, 𝑤𝑑,𝑗−1, 𝑤𝑑,𝑗, 𝑤𝑑,𝑗+1], which is used to produce the feature functions for Max-Ent. For phrasal terms (n-grams), all POS tags and lexemes of 𝑤𝑑,𝑗 are considered as contextual information for computing feature functions in Max-Ent. We now detail the generative process of JTE-P (plate notation in Figure 1) as follows: 1. For each AD-expression type 𝑒, draw 𝜑𝑒𝐸~𝐷𝑖𝑟(𝛽𝐸) 2. For each topic 𝑡, draw 𝜑𝑡 𝑇~𝐷𝑖𝑟(𝛽𝑇) 3. For each pair 𝑝, draw 𝜃𝑝𝐸~𝐷𝑖𝑟(𝛼𝐸); 𝜃𝑝𝑇~𝐷𝑖𝑟(𝛼𝐸) 4. For each forum discussion post 𝑑∈{1 … 𝐷}: i. Given the author 𝑎𝑑 and the list of targets 𝑏𝑑, for each term 𝑤𝑑,𝑗, 𝑗∈{1 … 𝑁𝑑}: a. Draw a target 𝑐~𝑈𝑛𝑖(𝑏𝑑) b. Form pair 𝑝= (𝑎𝑑, 𝑐), 𝑐∈𝑏𝑑 c. Set 𝜓𝑑,𝑗←𝑀𝑎𝑥𝐸𝑛𝑡(𝑥𝑑,𝑗; 𝜆) d. Draw 𝑟𝑑,𝑗~𝐵𝑒𝑟𝑛(𝜓𝑑,𝑗) e. if (𝑟𝑑,𝑗= 𝑒̂) // 𝑤𝑑,𝑗 is an AD-expression term Draw 𝑧𝑑,𝑗~𝑀𝑢𝑙𝑡(𝜃𝑝𝐸) else // 𝑟𝑑,𝑗= 𝑡̂, 𝑤𝑑,𝑗 is a topical term Draw 𝑧𝑑,𝑗~𝑀𝑢𝑙𝑡(𝜃𝑝𝑇) f. Emit 𝑤𝑑,𝑗~𝑀𝑢𝑙𝑡(𝜑𝑧𝑑,𝑗 𝑟𝑑,𝑗) 𝐷𝑖𝑟, 𝑀𝑢𝑙𝑡, 𝐵𝑒𝑟𝑛, and 𝑈𝑛𝑖 correspond to the Dirichlet, Multinomial, Bernoulli, and Uniform distributions respectively. To learn JTE-P, we employ approximate posterior inference using Monte Carlo Gibbs sampling. Denoting the random variables {𝑤, 𝑧, 𝑝, 𝑟} associated with each term by singular subscripts {𝑤𝑘, 𝑧𝑘, 𝑝𝑘, 𝑟𝑘}, 𝑘1…𝐾, 𝐾= ∑𝑁𝑑 𝑑 , a single Gibbs sweep consists of performing the following sampling. 
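The generative story can be made concrete with a toy forward simulation before turning to the sampling equations. This is a sketch under simplifying assumptions: the Max-Ent switch is collapsed to a fixed probability p_expr (in the real model it depends on the feature context x_{d,j}), the hyper-parameters are the heuristic symmetric values used later in the experiments, and all sizes are toy.

```python
import numpy as np

rng = np.random.default_rng(0)
T, E, V = 5, 2, 200                      # toy sizes: topics, expression types, vocabulary
alpha_T, alpha_E = 50.0 / T, 50.0 / E    # heuristic symmetric priors (see Section 6.1)
beta_T = beta_E = 0.1

phi_E = rng.dirichlet([beta_E] * V, size=E)   # step 1: AD-expression term distributions
phi_T = rng.dirichlet([beta_T] * V, size=T)   # step 2: topic term distributions

pairs = [("a1", "a2"), ("a1", "a3")]
theta_T = {p: rng.dirichlet([alpha_T] * T) for p in pairs}   # step 3: per-pair topic mix
theta_E = {p: rng.dirichlet([alpha_E] * E) for p in pairs}   # step 3: per-pair Ag/DisAg mix

def generate_post(author, targets, n_terms, p_expr=0.3):
    """Forward-simulate the term-level loop of step 4; p_expr replaces the Max-Ent prior."""
    terms = []
    for _ in range(n_terms):
        c = targets[rng.integers(len(targets))]   # a. draw a target uniformly from b_d
        p = tuple(sorted((author, c)))            # b. form the pair p = (a_d, c)
        is_expr = rng.random() < p_expr           # c-d. switch variable r_{d,j}
        if is_expr:
            z = rng.choice(E, p=theta_E[p])       # e. AD-expression type
            w = rng.choice(V, p=phi_E[z])         # f. emit term from phi^E_z
        else:
            z = rng.choice(T, p=theta_T[p])       # e. topic
            w = rng.choice(V, p=phi_T[z])         # f. emit term from phi^T_z
        terms.append((w, is_expr, z, p))
    return terms

print(generate_post("a1", ["a2", "a3"], n_terms=5))
```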
𝑝(𝑧𝑘= 𝑡, 𝑝𝑘= 𝑝, 𝑟𝑘= 𝑡̂| … ) ∝ 1 |𝑏𝑑| 𝑒𝑥𝑝൫∑ 𝜆𝑖𝑓𝑖൫𝑥𝑑,𝑗,𝑡̂൯ 𝑛 𝑖=1 ൯ ∑ 𝑒𝑥𝑝൫∑ 𝜆𝑖𝑓𝑖൫𝑥𝑑,𝑗,𝑦൯ 𝑛 𝑖=1 ൯ 𝑦∈{𝑒ෝ,𝑡෠} × 𝑛𝑝,𝑡 𝑃𝑇 ¬𝑘+𝛼𝑇 𝑛𝑝,(·) 𝑃𝑇 ¬𝑘+𝑇𝛼𝑇 𝑛𝑡,𝑣 𝐶𝑇 ¬𝑘+𝛽𝑇 𝑛𝑡,(·) 𝐶𝑇 ¬𝑘+𝑉𝛽𝑇 (1) 𝑝(𝑧𝑘= 𝑒, 𝑝𝑘= 𝑝, 𝑟𝑘= 𝑒̂| … ) ∝ 1 |𝑏𝑑| 𝑒𝑥𝑝൫∑ 𝜆𝑖𝑓𝑖൫𝑥𝑑,𝑗,𝑒̂൯ 𝑛 𝑖=1 ൯ ∑ 𝑒𝑥𝑝൫∑ 𝜆𝑖𝑓𝑖൫𝑥𝑑,𝑗,𝑦൯ 𝑛 𝑖=1 ൯ 𝑦∈{𝑒ෝ,𝑡෠} × 𝑛𝑝,𝑒 𝑃𝐸 ¬𝑘+𝛼𝐸 𝑛𝑝,(·) 𝑃𝐸 ¬𝑘+𝐸𝛼𝐸 𝑛𝑒,𝑣 𝐶𝐸 ¬𝑘+𝛽𝐸 𝑛𝑒,(·) 𝐶𝐸 ¬𝑘+𝑉𝛽𝐸 (2) Count variables 𝑛𝑡,𝑣 𝐶𝑇, 𝑛𝑒,𝑣 𝐶𝐸, 𝑛𝑝,𝑡 𝑃𝑇, and 𝑛𝑝,𝑒 𝑃𝐸 are detailed in Table 1. Omission of a latter index denoted by (·) represents the marginalized sum over the latter index. 𝑘= (𝑑, 𝑗) denotes the 𝑗th term of document 𝑑 and the subscript ¬𝑘 denotes the counts excluding the term at (𝑑, 𝑗). 𝜆1…𝑛 are the parameters of the learned Max-Ent model corresponding to the 𝑛 binary feature functions 𝑓1…𝑛 for Max-Ent. These learned Max-Ent 𝜆 parameters in conjunction with the observed feature context, 𝑥𝑑,𝑗 feed the supervision signal for topic/expression switch parameter, r which is updated during inference in equations (1) and (2). 674 4 Phrase Ranking based on Relevance We now detail our method of pre-processing ngrams (phrases) based on relevance to select a subset of highly relevant n-grams for model building. This has two advantages: (i). A large number of irrelevant n-grams slow inference. (ii). Filtering irrelevant terms in the vocabulary improves the quality of AD-expressions. Before proceeding, we review some existing approaches. Topics in most topic models like LDA are usually unigram distributions. This offers a great computational advantage compared to more complex models which consider word ordering (Wallach, 2006; Wang et al., 2007). This thread of research models bigrams by encoding them into the generative process. For each word, a topic is sampled first, then its status as a unigram or bigram is sampled, and finally the word is sampled from a topic-specific unigram or bigram distribution. This method, however, is expensive computationally and has a limitation for arbitrary length n-grams. In (Tomokiyo and Hurst, 2003), a language model approach is used for bigram phrase extraction. Yet another thread of research post-processes the discovered topical unigrams to form multiword phrases using likelihood scores (Blei and Lafferty, 2009). This approach considers adjacent word pairs and identifies n-grams which occur much more often than one would expect by chance alone by computing likelihood ratios. While this is reasonable, a significant n-gram with high likelihood score may not necessarily be relevant to the problem domain. For instance, in our case of discovering AD-expressions, the likelihood score 2 of 𝑝1 = “the government of” happens to be more than 𝑝2 = “I completely disagree”. Clearly, the former is irrelevant for the task of discovering AD-expressions. The reason for this is that likelihood scores or other statistical test scores rely on the relative counts in the multi-way contingency table to compute significance. Since the relative counts of different fragments of the irrelevant phrase 𝑝1, e.g. “the government”, and “government of”, happen to appear more than the corresponding counts in the contingency table of 𝑝2, the tests assign a higher score. This is nothing wrong per se because the statistical tests only judge significance of an ngram, but a significant n-gram may not necessarily be relevant in a given problem domain. 2 Computed using N-gram statistics package, NSP; http://ngram.sourceforge.net Thus, the existing approaches have some major shortcomings for our task. 
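Before describing our ranking method, it is worth making the sampler above concrete. The sketch below evaluates the unnormalized conditionals of equations (1) and (2) for a single token and one candidate pair; it is an illustrative simplification, not the authors' implementation, and the Max-Ent factor is assumed to be summarized as a precomputed switch probability. A full sweep would also range over the candidate pairs p = (a_d, c), c in b_d, and update the counts after each draw.

```python
import numpy as np

def gibbs_weights(v, pair, counts, priors, switch_prob, n_targets):
    """Unnormalized weights of equations (1) and (2) for one occurrence of term v.
    The count matrices must already exclude this token (the ¬k convention);
    switch_prob = (P(r = topic | x), P(r = expression | x)) stands in for the
    Max-Ent factor, assumed precomputed here."""
    n_PT, n_PE, n_CT, n_CE = counts          # shapes: (P, T), (P, E), (T, V), (E, V)
    a_T, a_E, b_T, b_E = priors
    T, V = n_CT.shape
    E = n_CE.shape[0]

    w_topic = (1.0 / n_targets) * switch_prob[0] \
        * (n_PT[pair] + a_T) / (n_PT[pair].sum() + T * a_T) \
        * (n_CT[:, v] + b_T) / (n_CT.sum(axis=1) + V * b_T)   # eq. (1), one weight per topic t

    w_expr = (1.0 / n_targets) * switch_prob[1] \
        * (n_PE[pair] + a_E) / (n_PE[pair].sum() + E * a_E) \
        * (n_CE[:, v] + b_E) / (n_CE.sum(axis=1) + V * b_E)   # eq. (2), one weight per type e

    return w_topic, w_expr

# Toy usage: resample the joint assignment (r, z) for one token.
rng = np.random.default_rng(1)
T, E, V, P = 4, 2, 50, 3
counts = (rng.integers(0, 5, (P, T)), rng.integers(0, 5, (P, E)),
          rng.integers(0, 5, (T, V)), rng.integers(0, 5, (E, V)))
wt, we = gibbs_weights(v=7, pair=0, counts=counts,
                       priors=(50.0 / T, 50.0 / E, 0.1, 0.1),
                       switch_prob=(0.7, 0.3), n_targets=2)
probs = np.concatenate([wt, we])
probs /= probs.sum()
assignment = rng.choice(len(probs), p=probs)   # index < T means a topic, otherwise an expression type
print("sampled assignment index:", assignment)
```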
As our goal is to enhance the expressiveness of our models by considering relevant n-grams preserving the advantages of exchangeable modeling, we employ a pre-processing technique to rank ngrams based on relevance and consider certain number of top ranked n-grams based on coverage (details follow) in our vocabulary. The idea works as follows. We first induce a unigram JTE-P whereby we cluster the relevant AD-expression unigrams in 𝜑𝐴𝑔 𝐸 and 𝜑𝐷𝑖𝑠𝐴𝑔 𝐸 . Our notion of relevance of ADexpressions is already encoded into the model using priors set from Max-Ent. Next, we rank the candidate phrases (n-grams) using our probabilistic ranking function. The ranking function is grounded on the following hypothesis: a relevant phrase is one whose unigrams are closely related to (or appear with high probabilities in) the given AD-expression type, 𝑒: Agreement ( 𝐴𝑔) or disagreement (𝐷𝑖𝑠𝐴𝑔). Continuing from the previous example, given the expression type 𝜑𝑒=𝐷𝑖𝑠𝐴𝑔 𝐸 , 𝑝2 is relevant while 𝑝1 is not as “government” and “disagree” are highly unlikely and likely respectively to be clustered in 𝜑𝑒=𝐷𝑖𝑠𝐴𝑔 𝐸 . Thus, we want to rank phrases based on 𝑃(𝑅𝑒𝑙= 1|𝑒, 𝑝) where 𝑒 denotes the expression type (Agreement/Disagreement), 𝑝 denotes a candidate phrase. Following the probabilistic relevance model in (Lafferty and Zhai, 2003), we use a similar technique to that in (Zhao et al., 2011) for deriving our relevance ranking function as follows: 𝑃(𝑅𝑒𝑙= 1|𝑒, 𝑝) = 𝑃(𝑅𝑒𝑙=1|𝑒,𝑝) 𝑃(𝑅𝑒𝑙=0|𝑒,𝑝)+𝑃(𝑅𝑒𝑙=1|𝑒,𝑝) = 1 1+𝑃(𝑅𝑒𝑙=0|𝑒,𝑝) 𝑃(𝑅𝑒𝑙=1|𝑒,𝑝) = 1 1+𝑃(𝑅𝑒𝑙=0,𝑝| 𝑒) 𝑃(𝑅𝑒𝑙=1,𝑝|𝑒) = 1 1+[𝑃(𝑝|𝑅𝑒𝑙=0,𝑒)×𝑃(𝑅𝑒𝑙=0|𝑒)] [𝑃(𝑝|𝑅𝑒𝑙=1,𝑒)×𝑃(𝑅𝑒𝑙=1|𝑒)] (3) We further define 𝜀= 𝑃(𝑅𝑒𝑙=0|𝑒) 𝑃(𝑅𝑒𝑙=1|𝑒). Without loss of generality, one can say that 𝑃(𝑅𝑒𝑙= 0|𝑒) ≫ 𝑃(𝑅𝑒𝑙= 1|𝑒) , because there are many more irrelevant phrases than relevant ones, i.e., 𝜀≫1. Thus, taking log, from equation (3), we get, log 𝑃(𝑅𝑒𝑙= 1|𝑒, 𝑝) = log ቆ 1 1+𝜀×𝑃(𝑝|𝑅𝑒𝑙=0,𝑒) 𝑃(𝑝|𝑅𝑒𝑙=1,𝑒) ቇ≈ log ቀ 𝑃(𝑝|𝑅𝑒𝑙=1,𝑒) 𝑃(𝑝|𝑅𝑒𝑙=0,𝑒) × 1 𝜀ቁ= log ቀ 𝑃(𝑝|𝑅𝑒𝑙=1,𝑒) 𝑃(𝑝|𝑅𝑒𝑙=0,𝑒)ቁ−log 𝜀 (4) Thus, our ranking function actually computes the relevance score log ቀ 𝑃(𝑝|𝑅𝑒𝑙=1,𝑒) 𝑃(𝑝|𝑅𝑒𝑙=0,𝑒)ቁ. The last term, log 𝜀 being a constant is ignored because it cancels out while comparing candidate n-grams. 675 We now estimate the relevance score of a phrase 𝑝= (𝑤1, 𝑤2, … , 𝑤𝑛). Using the conditional independence assumption of words given the indicator variable 𝑅𝑒𝑙 and expression type 𝑒, we have: log ቀ 𝑃(𝑝|𝑅𝑒𝑙=1,𝑒) 𝑃(𝑝|𝑅𝑒𝑙=0,𝑒)ቁ= ∑ log 𝑃(𝑤𝑖|𝑅𝑒𝑙=1,𝑒) 𝑃(𝑤𝑖|𝑅𝑒𝑙=0,𝑒) 𝑛 𝑖=1 (5) Given the expression model 𝜑𝑒 𝐸 previously learned by inducing the unigram JTE-P, it is intuitive to set 𝑃(𝑤𝑖|𝑅𝑒𝑙= 1, 𝑒) to the point estimate of the posterior on 𝜑𝑒,𝑤𝑖 𝐸 = 𝑛𝑒,𝑤𝑖 𝐸𝑉+𝛽𝐸 𝑛𝑒,(·) 𝐸𝑉+𝑉𝛽𝐸, where 𝑛𝑒,𝑤𝑖 𝐸𝑉 is the number of times 𝑤𝑖 was assigned to AD-expression type 𝑒 and 𝑛𝑒,(·) 𝐸𝑉 denotes the marginalized sum over the latter index. On the other hand, 𝑃(𝑤𝑖|𝑅𝑒𝑙= 0, 𝑒) can be estimated using a Laplace smoothed ( 𝜇 = 1) background model, i.e., (𝑤𝑖|𝑅𝑒𝑙= 0, 𝑒) = 𝑛𝑤𝑖+𝜇 𝑛𝑉+𝑉𝜇 , where 𝑛𝑤𝑖 denotes the number of times 𝑤𝑖 appears in the whole corpus and 𝑛𝑉 denotes the number of terms in the entire corpus. Next, we throw light on the issue of choosing the number of top k phrases from the ranked candidate n-grams. Precisely, we want to analyze the coverage of our proposed ranking based on relevance models. By coverage, we mean that having selected top k candidate n-grams based on the proposed relevance ranking, we want to get an estimate of how many relevant terms from a sample of the collection were covered. 
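A sketch of the scoring in equation (5), together with the coverage statistic just introduced, is given below. The smoothing values follow the text (mu = 1 for the background model; beta_E = 0.1 as used in the experiments), while the counts, phrases and judgment sets are invented for illustration.

```python
import math
from collections import Counter

def relevance_score(phrase_tokens, n_e_v, n_e_total, bg_counts, n_bg_total, V,
                    beta_E=0.1, mu=1.0):
    """log P(p | Rel=1, e) / P(p | Rel=0, e) from equation (5): each word is scored
    against the point estimate of phi_e^E (counts from the unigram JTE-P run) and a
    Laplace-smoothed background unigram model of the whole corpus."""
    score = 0.0
    for w in phrase_tokens:
        p_rel1 = (n_e_v.get(w, 0) + beta_E) / (n_e_total + V * beta_E)
        p_rel0 = (bg_counts.get(w, 0) + mu) / (n_bg_total + V * mu)
        score += math.log(p_rel1) - math.log(p_rel0)
    return score

def coverage(ranked_candidates, judged_relevant, k):
    """Share (%) of judge-marked relevant AD-expressions found in the top-k candidates."""
    top_k = set(ranked_candidates[:k])
    return 100.0 * sum(t in top_k for t in judged_relevant) / len(judged_relevant)

# Invented counts: assignments to e = Disagreement from a unigram run vs. corpus counts.
n_e_v = Counter({"i": 900, "disagree": 700, "completely": 150,
                 "the": 300, "government": 5, "of": 250})
bg = Counter({"i": 5000, "disagree": 800, "completely": 900,
              "the": 20000, "government": 4000, "of": 15000})
V = 10000
for phrase in (["i", "completely", "disagree"], ["the", "government", "of"]):
    s = relevance_score(phrase, n_e_v, sum(n_e_v.values()), bg, sum(bg.values()), V)
    print(" ".join(phrase), round(s, 2))
print(coverage(["i disagree", "the government of", "i refute"],
               {"i disagree", "i refute", "hogwash"}, k=2), "% coverage")
```

On these toy counts, "i completely disagree" receives a large positive score while "the government of" is strongly negative, mirroring the p1/p2 example discussed above.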
To compute coverage, we randomly sampled 500 documents from the corpus and listed the candidate n-grams3 in the collection of sampled 500 documents. For this and subsequent human judgment tasks, we use two judges (graduate students well versed in English). We asked our judges to mark all relevant AD-expressions. Agreement study yielded κCohen = 0.77 showing substantial agreement according to scale 4 provided in (Landis and Koch, 1977). This is understandable as identifying AD-expressions is a relatively easy task. Finally, a term was considered to be relevant if both judges marked it so. We then computed the coverage to see how many of the relevant terms in the random sample were also present in top k phrases from the ranked candidate n-grams. We summarize the 3 These are terms appearing at least 20 times in the entire collection. We do this for computational reasons as there can be many n-grams and n-grams with very low frequency are less likely to be relevant. 4 No agreement (κ < 0), slight agreement (0 < κ ≤ 0.2), fair agreement (0.2 < κ ≤ 0.4), moderate agreement (0.4 < κ ≤ 0.6), substantial agreement (0.6 < κ ≤ 0.8), and almost perfect agreement 0.8 < κ ≤ 1.0. coverage results below in Table 2. k 3000 4000 5000 JTE-P Agreement 81.34 84.24 87.01 Disagreement 84.96 87.86 89.64 Table 2: Coverage (in %) of AD-expressions. We find that choosing top k = 5000 candidate ngrams based on our proposed ranking, we obtain a coverage of 87% for agreement and 89.64 for disagreement expression types which are reasonably good. Thus, we choose top 5000 candidate n-grams for each expression type and add them to the vocabulary beyond all unigrams. Like expression types 𝑒1…𝐸, we also ranked candidate phrases for topics 𝑡1…𝑇 using 𝑃(𝑅𝑒𝑙= 1|𝑡, 𝑝). However, for topics, selecting k based on coverage of each topic is more difficult because we induce 50 topics and it is also much more difficult to manually find relevant topical phrases in the sampled data as a topical phrase may belong to more than one topic. We selected top 2000 ranked candidate phrases for each topic using 𝑃(𝑅𝑒𝑙= 1|𝑡, 𝑝) as we feel that is sufficient for a topic. Note that phrases for topics are not as crucial as for AD-expressions because topics can more or less be defined by unigrams. 5 Classifying Pair Interaction Nature We now determine whether two users (also called a user pair) mostly agree or disagree with each other in their exchanges, i.e., their pair interaction or arguing nature. This is a relatively new task. We first summarize the closest related works. In (Galley et al., 2004; Hillard et al., 2003; Thomas et al., 2006, Bansal et al., 2008), conversational speeches (i.e., U.S. Congress meeting transcripts) are classified into for or against an issue using various types of features: durational (e.g., time taken by a speaker; speech rate, etc.), structural (e.g., no. of speakers per side, no. of votes cast by a speaker on a bill, etc.), and lexical (e.g., first word, last word, n-grams, etc.). Burfoot et al., (2011) builds on the work of (Thomas et al., 2006) and proposes collective classification using speaker contextual features (e.g., speaker intentions based on vote labels). However, above works do not discover pair interactions (arguing nature) in debate authors. Online discussion forums are textual rather than conversational (e.g., U.S. Congress meeting transcripts). Thus, the durational, structural, and contextual features used in prior works are not directly applicable. 
Instead, the model posterior on 𝜃𝑝𝐸 for JTE-P 676 can actually give an estimate of the overall interaction nature of a pair, i.e., the probability masses assigned to expression types, 𝑒= 𝐴𝑔(Agreement) and 𝑒= 𝐷𝑖𝑠𝐴𝑔 (Disagreement). As 𝜃𝑝 𝐸~𝐷𝑖𝑟(𝛼𝐸), we have 𝜃𝑝,𝑒=𝐴𝑔 𝐸 + 𝜃𝑝,𝑒=𝐷𝑖𝑠𝐴𝑔 𝐸 = 1. Hence, if the probability mass assigned to any one of the expression types (agreement, disagreement) > 0.5 then according to the model posterior, that expression type is dominant, i.e., if 𝜃𝑝,𝐴𝑔 𝐸 > 0.5, the pair is agreeing else disagreeing. However, this approach is not the best. As we will see in the experiment section, supervised classification using labeled training data with discovered AD-expressions as features performs better. 6 Empirical Evaluation We now evaluate the proposed techniques in the context of the JTE-P model. We first evaluate the discovered AD-expressions by comparing results with and without using the phrase ranking method in Section 4, and then evaluate the classification of interaction nature of pairs. 6.1 Dataset and Experiment Settings We crawled debate/discussion forum posts from Volconvo.com. The forum is divided into various domains. Each domain consists of multiple threads of discussions. For each post, we extracted the post id, author, domain, ids of all posts to which it replies/quotes, and the post content. In all, we extracted 26137, 34986, 22354, and 16525 posts from Politics, Religion, Society and Science domains respectively. Experiment Data: As it is not interesting to study pairs who only exchanged a few posts, we restrict to pairs with at least 20 post exchanges. This resulted in 1241 authors and 1461 pairs. The reduced dataset consists of 1095586 tokens (after n-gram preprocessing in §4), 40102 posts with an average of 27 posts or interactions per pair. Data from all 4 domains are combined for modeling. Parameter Settings: For all our experiments, we set the hyper-parameters to the heuristic values 𝛼𝑇 = 50/𝑇, 𝛼𝐸 = 50/𝐸, 𝛽𝑇 = 𝛽𝐸 = 0.1 suggested in (Griffiths and Steyvers, 2004). We set the number of topics, 𝑇 = 50 and the number of ADexpression types, 𝐸 = 2 (agreement and disagreement) as in discussion/debate forums, there are usually two expression types5. To learn 5 Values for 𝐸 > 2 were also tried. However, they did not produce any new dominant expression type. There was also a slight increase in the model perplexity showing that values of 𝐸 > 2 do not fit the debate forum data well. the Max-Ent parameters 𝜆, we randomly sampled 500 terms from the held-out data (10 threads in our corpus which were excluded from the evaluation of tasks in §6.2, §6.3) appearing at least 10 times and labeled them as topical (361) or AD-expressions (139) and used the corresponding features of each term (in the context of posts where it occurs, §3) to train the Max-Ent model. 6.2 AD-Expression Evaluation We first list some discovered top AD-expressions in Table 3 for qualitative inspection. From Table 3, we can see that JTE-P can cluster many correct AD-expressions, e.g., “I accept”, “I agree”, “you’re correct”, etc. in agreement and “I disagree”, “don’t accept”, “I refute”, etc. in disagreement. In addition, it also discovers and clusters highly specific and more “distinctive” expressions beyond those used in Max-Ent training, e.g., “valid point”, “I do support”, and “rightly said” in agreement; and phrases like “can you prove”, “I don’t buy your”, and “you fail to” in disagreement. Note that terms in black in Table 3 were used in Max-Ent training. 
The newly discovered terms are marked blue in italics. Clustering errors are in red (bold). For quantitative evaluation, topic models are often compared using perplexity. However, perplexity does not reflect our purpose since we are not trying to evaluate how well the ADexpressions in an unseen discussion data fit our learned models. Instead our focus is to evaluate how well our learned AD-expression types perform in clustering semantic phrases of agreement/disagreement. Since AD-expressions (according to top terms in 𝜑𝐸) produced by JTEP are rankings, we choose precision @ n (p@n) as our metric. p@n is commonly used to evaluate a ranking when the total number of correct items is unknown (e.g., Web search results, aspect terms in topic models for sentiment analysis (Zhao et al., 2010), etc.). This situation is similar to our AD-expression rankings, 𝜑𝐸. Further, as 𝜑𝐸~𝐷𝑖𝑟, the Dirichlet smoothing effect ensures that every term in the vocabulary has some nonzero mass to agreement or disagreement expression type. Thus, it is the ranking of terms in each AD-expression type that matters (i.e., whether the model is able to rank highly relevant terms at the top). The above method evaluates the original ranking. Another way of evaluating the ADexpression rankings is to evaluate only those newly discovered terms, i.e., beyond those 677 labeled terms used in Max-Ent training. For this evaluation, we remove those terms that have been used in Max-Ent (ME) training. We report both results in Table 4. We also studied interrater agreement using two judges who independently labeled the top n terms as correct or incorrect. A term was marked correct if both judges deemed it so which was then used to compute p@n. Agreement using 𝜅𝐶𝑜ℎ𝑒𝑛 was greater than 0.78 for all p@n computations implying substantial and good agreements as identifying whether a phrase implies agreement or disagreement or none is an easy task. P@n excluding ME labeled terms (Table 4, second column) are slightly lower than those using all terms but are still decent. This is because p@n excluding ME labeled terms removes many correct AD-expressions used in training. Further to evaluate the sensitivity of performance on the amount of labeled terms for Max-Ent, we computed p@n across different sizes of labeled terms. Table 4 shows p@n for agreement and disagreement expressions across different sizes of labeled terms (L). We find that more labeled terms improves p@n which is intuitive. We used 500 labeled terms in all our subsequent experiments. The result in Table 4 uses relevance ranking (§4). 
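The p@n computation used above is simple enough to state directly; a minimal sketch with an invented ranking and judgment set (not the paper's data):

```python
def precision_at_n(ranked_terms, judged_correct, n):
    """p@n over the top-n terms of an AD-expression ranking (phi^E); a term counts
    as correct only if both judges marked it so (the set judged_correct)."""
    return sum(1 for t in ranked_terms[:n] if t in judged_correct) / float(n)

# Hypothetical ranking for e = Agreement and the doubly-judged correct terms.
ranking = ["i agree", "you're correct", "not really", "valid point", "don't"]
correct = {"i agree", "you're correct", "valid point", "agree completely"}
print(precision_at_n(ranking, correct, n=5))   # -> 0.6 on this toy list
```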
Disagreement expressions (𝜑𝑒=𝐷𝑖𝑠𝑎𝑔𝑟𝑒𝑒𝑚𝑒𝑛𝑡 𝐸 ) I, disagree, I don’t, I disagree, argument, reject, claim, I reject, I refute, and, your, I refuse, won’t, the claim, nonsense, I contest, dispute, I think, completely disagree, don’t accept, don’t agree, incorrect, doesn’t, hogwash, I don’t buy your, I really doubt, your nonsense, true, can you prove, argument fails, you fail to, your assertions, bullshit, sheer nonsense, doesn’t make sense, you have no clue, how can you say, do you even, contradict yourself, … Agreement expressions (𝜑𝑒=𝐴𝑔𝑟𝑒𝑒𝑚𝑒𝑛𝑡 𝐸 ) agree, I, correct, yes, true, accept, I agree, don’t, indeed correct, your, I accept, point, that, I concede, is valid, your claim, not really, would agree, might, agree completely, yes indeed, absolutely, you’re correct, valid point, argument, the argument, proves, do accept, support, agree with you, rightly said, personally, well put, I do support, personally agree, doesn’t necessarily, exactly, very well put, kudos, point taken, ... Table 3: Top terms (comma delimited) of two expression types. Red (bold) terms denote possible errors. Blue (italics) terms are newly discovered; rest (black) terms have been used in Max-Ent training. P@n L JTE-P (all terms) JTE-P (excluding labeled ME terms) Agreement Disagreement Agreement Disagreement 50 100 150 50 100 150 50 100 150 50 100 150 100 0.62 0.63 0.61 0.64 0.62 0.63 0.58 0.56 0.57 0.60 0.59 0.58 200 0.66 0.67 0.65 0.68 0.66 0.67 0.62 0.59 0.60 0.64 0.63 0.62 300 0.70 0.70 0.71 0.70 0.68 0.67 0.66 0.66 0.65 0.66 0.66 0.65 400 0.72 0.72 0.73 0.74 0.71 0.70 0.68 0.67 0.69 0.70 0.68 0.69 500 0.76 0.77 0.75 0.76 0.73 0.74 0.70 0.71 0.70 0.72 0.71 0.70 Table 4: Results using terms based on phrase relevance ranking for P @ n= 50, 100, 150 across 100, 200, …, 500 labeled examples (L) used for Max-Ent (ME) training. P@n L JTE-P (all terms) JTE-P (excluding ME terms) Agreement Disagreement Agreement Disagreement 50 100 150 50 100 150 50 100 150 50 100 150 500 0.66 0.69 0.69 0.72 0.70 0.70 0.66 0.65 0.64 0.68 0.66 0.65 Table 5: Results using all tokens (without applying phrase relevance ranking) for P@50, 100, 150 and 500 labeled examples were used for Max-Ent (ME) training). Feature Setting Agreeing Disagreeing P R F1 P R F1 JTE-P-posterior 0.59 0.61 0.60 0.81 0.70 0.75 W+POS 1-4 grams 0.63 0.66 0.64 0.83 0.82 0.82 W+POS 1-4grams + IG (top 1%) 0.64 0.67 0.65 0.84 0.82 0.83 W+POS 1-4 grams + IG (top 2%) 0.65 0.67 0.66 0.84 0.82 0.83 W+POS 1-4 grams + χ2 (top 1%) 0.65 0.68 0.66 0.84 0.83 0.83 W+POS 1-4 grams + χ2(top 2%) 0.64 0.68 0.69 0.84 0.82 0.83 AD-Expressions, Φ𝐸 (top 1000) 0.73 0.74 0.73 0.87 0.87 0.87 AD-Expressions, Φ𝐸 (top 2000) 0.77 0.81 0.78 0.90 0.88 0.89 Table 6: Precision (P), recall (R), and F1 scores of pair interaction evaluation. Improvements in F1 using AD-expression features (𝜑𝐸) are statistically significant (p<0.01) using paired t-test across 5-fold CV. 678 We now compare with the performance of the model without using phrase relevance ranking. P@n results using all tokens (4356787) are shown in Table 5 (with 500 labeled terms for Max-Ent training). Clearly, P@n is lower than in Table 4 (last row; with phrase relevance ranking) because without phrase relevance ranking (Table 5) many irrelevant terms can rank high due to cooccurrences which may not be semantically related. This shows that relevance ranking of phrases is beneficial. 6.3 Pair Interaction Nature We now evaluate the overall interaction nature of each pair of users. 
The evaluation of this task requires human judges to read all the posts where the two users forming the pair have interacted. Thus, it is hard to evaluate all 1461 pairs in our dataset. Instead, we randomly sampled 500 pairs (≈ 34% of the population) for evaluation. Two human judges were asked to independently read all the post interactions of 500 pairs and label each pair as overall “disagreeing” or overall “agreeing” or “none”. The 𝜅𝐶𝑜ℎ𝑒𝑛 for this task was 0.81. Pairs were finally labeled as agreeing or disagreeing if both judges deemed them so. This resulted in 320 disagreeing and 152 agreeing pairs. Out of the rest 28 pairs, 10 were marked “none” by both judges while 18 pairs had disagreement in labels. We only focus on the 472 agreeing and disagreeing pairs. As we have labeled data for 472 pairs, we can treat identifying pair arguing nature as a text classification problem where all interactions between a pair are merged in one document representing the pair along with the label given by judges: agreeing or disagreeing. To compare classification performance, we use two feature sets: (i) standard word + POS 1-4 grams and (ii) AD-expressions from 𝜑𝐸. We use TF-IDF as our feature value assignment scheme. We also try two well-known feature selection schemes ChiSquared Test (χ2) and Information Gain (IG). We use the linear kernel6 SVM (SVMlight system in (Joachims, 1999)) as our text classifier. For feature selection using χ2 and IG, we use two settings: top 1% and 2% of all features ranked according to the selection metric. Also, for estimated AD-expressions (according to probabilities in 𝜑𝐸), we experiment with top 1000 and 2000 AD-expressions terms for both agreement and disagreement. We summarize 6 Other kernels polynomial, RBF, and sigmoid did not perform as well. comparison results using 5-fold Cross Validation (CV) with two classes: agreeing and disagreeing in Table 6. JTE-P-posterior represents the method using simply the model posterior on 𝜃𝑝𝐸 to make the decision (see §5). From Table 6, we can make the following observations. Predicting agreeing arguing nature is harder than that of disagreeing across all feature settings. Feature selection improves performance. χ2 and IG perform similarly. AD-expressions, 𝜑𝐸yields the best performance showing that the discovered AD-expressions are of high quality and reflect the user pair arguing nature well. Selecting certain top terms in 𝜑𝐸 can also be viewed as a form of feature selection. Although prediction performance using model posterior (JTE-P-posterior) is slightly lower than supervised SVM (Table 6, second row), the F1 scores are decent. Using the discovered ADexpressions (Table 6, last low) as features renders a statistically significant (see Table 6 caption) improvement over other baseline feature settings. This shows that discovered ADexpressions are useful for downstream applications, e.g., the task of identifying pair interactions. 7 Conclusion This paper studied the problem of modeling user pair interactions in online discussions with the purpose of discovering the interaction or arguing nature of each author pair and various ADexpressions emitted in debates. A novel technique was also proposed to rank n-gram phrases where relevance based ranking was used in conjunction with a semi-supervised generative model. This method enables us to find better ADexpressions. Experiments using real-life online debate data showed the effectiveness of the model. 
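A sketch of the classification setup just described is given below: a small hypothetical corpus of merged per-pair documents, TF-IDF weighted word 1-4 grams, chi-squared feature selection, and a linear-kernel SVM, evaluated with 5-fold cross validation; scikit-learn's LinearSVC stands in for the SVMlight system used in the paper, and the tiny posterior-threshold function mirrors the JTE-P-posterior baseline. All data and values here are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def posterior_label(theta_p_Ag):
    """JTE-P-posterior baseline: the dominant expression type (> 0.5) decides."""
    return "agreeing" if theta_p_Ag > 0.5 else "disagreeing"

# Hypothetical data: all interactions of a pair merged into one document, plus the judges' label.
pair_docs = ["i completely disagree you fail to see the point",
             "valid point i agree with you on this",
             "sheer nonsense can you prove your claim",
             "well put rightly said i do support that"] * 30
labels = ["disagreeing", "agreeing", "disagreeing", "agreeing"] * 30

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 4)),   # word 1-4 gram features with TF-IDF values
    SelectKBest(chi2, k=40),               # chi-squared feature selection (top features)
    LinearSVC(),                           # linear-kernel SVM classifier
)
scores = cross_validate(model, pair_docs, labels,
                        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
                        scoring=("precision_macro", "recall_macro", "f1_macro"))
print({m: round(v.mean(), 3) for m, v in scores.items() if m.startswith("test_")})
print(posterior_label(0.24), posterior_label(0.62))
```

Restricting the vectorizer's vocabulary to the top terms of phi^E would correspond to the AD-expression feature settings reported in Table 6.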
In our future work, we intend to extend the model to account for stances, and issue specific interactions which would pave the way for user profiling and behavioral modeling. Acknowledgments We would like thank Sharon Meraz (Department of communication, University of Illinois at Chicago) and Dennis Chong (Department of Political Science, Northwestern University) for several valuable discussions. This work was supported in part by a grant from the National Science Foundation (NSF) under grant no. IIS1111092. 679 References Abu-Jbara, A., Dasigi, P., Diab, M. and Dragomir Radev. 2012. Subgroup detection in ideological discussions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2012). Agrawal, R., Rajagopalan, S., Srikant, R., and Xu. Y. 2003. Mining newsgroups using networks arising from social behavior. In Proceedings of the International Conference on World Wide Web (WWW-2003). Ahmed, A and Xing, E. 2010. Staying informed: supervised and semi-supervised multi-view topical analysis of ideological perspective. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP-2010). Anand, P., Walker, M., Abbott, R., Tree, J., Bowmani, R., and Minor, M. 2011. Cats rule and dogs drool!: Classifying stance in online debate. In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis. Bansal, M., Cardie, C., and Lee, L. 2008. The power of negative thinking: Exploiting label disagreement in the min-cut classification framework. In Proceedings of the International Conference on Computational Linguistics (Short Paper). Blei, D., Ng, A., and Jordan, M. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research. Blei, D. and Lafferty J. 2009. Visualizing topics with multi-word expressions. Tech. Report. arXiv:0907.1013v1. Brody, S. and Elhadad, S. 2010. An Unsupervised Aspect-Sentiment Model for Online Reviews. In Proceedings of the Annual Conference of the North American Chapter of the ACL (NAACL-2010). Burfoot, C., Bird, S., and Baldwin, T. 2011. Collective Classification of Congressional Floor-Debate Transcripts. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2001). Chang, J., Boyd-Graber, J., Wang, C. Gerrish, S. Blei, D. 2009. Reading tea leaves: How humans interpret topic models. In Proceedings of the Neural Information Processing Systems (NIPS-2009). Chen, Z., Mukherjee, A., Liu, B., Hsu, M., Castellanos, M., Ghosh, R. 2013. Leveraging MultiDomain Prior Knowledge in Topic Models. In Proceedings of the International Joint Conference in Artificial Intelligence (IJCAI-2013). Choi, Y. and Cardie, C. 2010. Hierarchical sequential learning for extracting opinions and their attributes (Short Paper). In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2010). Du, L., Buntine, W. L., and Jin, H. 2010. Sequential Latent Dirichlet Allocation: Discover Underlying Topic Structures within a Document. In Proceedings of the IEEE International Conference on Data Mining (ICDM-2010). Erosheva, E., Fienberg, S. and Lafferty, J. 2004. Mixed membership models of scientific publications. In Proceedings of the National Academy of Sciences (PNAS-2004). Galley, M., McKeown, K., Hirschberg, J., and Shriberg, E. 2004. Identifying agreement and disagreement in conversational speech: Use of Bayesian networks to model pragmatic dependencies. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2004). Griffiths, T. and Steyvers, M. 2004. Finding scientific topics. In Proceedings of the National Academy of Sciences (PNAS-2004). Hansen, G. J., and Hyunjung, K. 2011. Is the media biased against me? A meta-analysis of the hostile media effect research. Communication Research Reports, 28, 169-179. Hillard, D., Ostendorf, M., and Shriberg, E. 2003. Detection of agreement vs. disagreement in meetings: Training with unlabeled data. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT-2003). Hassan, A. and Radev, D. 2010. Identifying text polarity using random walks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2010). Hofmann, T. 1999. Probabilistic latent semantic analysis. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI-1999). Hu, M. and Liu, B. 2004. Mining and summarizing customer reviews. In Proceedings of the SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2004). Jo, Y. and Oh, A. 2011. Aspect and sentiment unification model for online review analysis. In Proceedings of the International Conference on Web Search and Data Mining (WSDM-2011). Joachims, T. Making large-Scale SVM Learning Practical. 1999. Advances in Kernel Methods - Support Vector Learning, B. Schölkopf and C. Burges and A. Smola (ed.), MIT-Press, 1999. Lafferty, J. and Zhai, C. 2003. Probabilistic relevance models based on document and query generation. Language Modeling and Information Retrieval. Landis, J. R. and Koch, G. G. 1977. The measurement of observer agreement for categorical data. Biometrics. Lin, C. and He, Y. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of the 680 International Conference on Knowledge Management (CIKM-2009). Lin, W. H., and Hauptmann, A. 2006. Are these documents written from different perspectives?: a test of different perspectives based on statistical distribution divergence. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2006). Liu, B. 2012. Sentiment Analysis and Opinion Mining. Morgan & Claypool Publisher, USA. McCallum, A., Wang, X., and Corrada-Emmanuel, A. 2007. Topic and Role Discovery in Social Networks with Experiments on Enron and Academic Email. Journal of Artificial Intelligence Research. Mei, Q., Ling, X., Wondra, M., Su, H., and Zhai, C. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of the International Conference on World Wide Web (WWW-2007). Mimno, D. and McCallum, A. 2007. Expertise modeling for matching papers with reviewers. In Proceedings of the SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2007). Mukherjee, A., Venkataraman, V., Liu, B., Meraz, S. 2013. Public Dialogue: Analysis of Tolerance in Online Discussions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2013). Mukherjee, A. and Liu, B. 2012a. Mining Contentions from Discussions and Debates. Proceedings of SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-2012). Mukherjee, A. and Liu, B. 2012b. Aspect Extraction through Semi-Supervised Modeling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2012). Mukherjee, A. and Liu, B. 2012c. 
Analysis of Linguistic Style Accommodation in Online Debates. In Proceedings of the International Conference on Computational Linguistics (COLING-2012). Mukherjee, A. and Liu, B. 2012d. Modeling Review Comments. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2012). Murakami A. and Raymond, R. 2010. Support or Oppose? Classifying Positions in Online Debates from Reply Activities and Opinion Expressions. In Proceedings of the International Conference on Computational Linguistics (Coling-2010). Pang, B. and Lee, L. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval. Popescu, A. and Etzioni, O. 2005. Extracting product features and opinions from reviews. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2005). Ramage, D., Hall, D., Nallapati, R, Manning, C. 2009. Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2009). Rosen-Zvi, M., Griffiths, T., Steyvers, M., and Smith, P. 2004. The author-topic model for authors and documents. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI-2004). Sunstein, C. R. 2002. The law of group polarization. Journal of political philosophy. Somasundaran, S. and Wiebe, J. 2009. Recognizing stances in online debates. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing (ACL-IJCNLP-2009). Titov, I. and R. McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of the International Conference on World Wide Web (WWW-2008). Thomas, M., Pang, B., and Lee, L. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2006). Tomokiyo, T., and Hurst, M. 2003. A language model approach to keyphrase extraction. In Proceedings of the ACL 2003 workshop on Multiword expressions: analysis, acquisition and treatment-Volume 18. Wallach, H. 2006. Topic modeling: Beyond bag of words. In Proceedings of the International Conference on Machine Learning (ICML-2006). Wang, X., McCallum, A., Wei, X. 2007. Topical Ngrams: Phrase and topic discovery, with an application to information retrieval. In Proceedings of the IEEE International Conference on Data Mining (ICDM-2007). Wiebe, J. 2000. Learning subjective adjectives from corpora. In Proc. of National Conference on AI (AAAI-2000). Yano, T., Cohen, W. and Smith, N. 2009. Predicting response to political blog posts with topic models. In Proceedings of the N. American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT-2009). Zhao, X., J. Jiang, J. He, Y. Song, P. Achananuparp, E.P. LiM, and X. Li. 2011. Topical keyphrase extraction from twitter. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL-2011). Zhao, X., Jiang, J., Yan, H., and Li, X. 2010. Jointly modeling aspects and opinions with a MaxEntLDA hybrid. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2010). 681
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 682–691, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Multilingual Affect Polarity and Valence Prediction in Metaphor-Rich Texts Zornitsa Kozareva USC Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292-6695 [email protected] Abstract Metaphor is an important way of conveying the affect of people, hence understanding how people use metaphors to convey affect is important for the communication between individuals and increases cohesion if the perceived affect of the concrete example is the same for the two individuals. Therefore, building computational models that can automatically identify the affect in metaphor-rich texts like “The team captain is a rock.”, “Time is money.”, “My lawyer is a shark.” is an important challenging problem, which has been of great interest to the research community. To solve this task, we have collected and manually annotated the affect of metaphor-rich texts for four languages. We present novel algorithms that integrate triggers for cognitive, affective, perceptual and social processes with stylistic and lexical information. By running evaluations on datasets in English, Spanish, Russian and Farsi, we show that the developed affect polarity and valence prediction technology of metaphor-rich texts is portable and works equally well for different languages. 1 Introduction Metaphor is a figure of speech in which a word or phrase that ordinarily designates one thing is used to designate another, thus making an implicit comparison (Lakoff and Johnson, 1980; Martin, 1988; Wilks, 2007). For instance, in “My lawyer is a shark” the speaker may want to communicate that his/her lawyer is strong and aggressive, and that he will attack in court and persist until the goals are achieved. By using the metaphor, the speaker actually conveys positive affect because having an aggressive lawyer is good if one is being sued. There has been a substantial body of work on metaphor identification and interpretation (Wilks, 2007; Shutova et al., 2010). However, in this paper we focus on an equally interesting, challenging and important problem, which concerns the automatic identification of affect carried by metaphors. Building such computational models is important to understand how people use metaphors to convey affect and how affect is expressed using metaphors. The existence of such models can be also used to improve the communication between individuals and to make sure that the speakers perceived the affect of the concrete metaphor example in the same way. The questions we address in this paper are: “How can we build computational models that can identify the polarity and valence associated with metaphor-rich texts?” and “Is it possible to build such automatic models for multiple languages?”. Our main contributions are: • We have developed multilingual metaphorrich datasets in English, Spanish, Russian and Farsi that contain annotations of the Positive and Negative polarity and the valence (from −3 to +3 scale) corresponding to the intensity of the affect conveyed in the metaphor. • We have proposed and developed automated methods for solving the polarity and valence tasks for all four languages. We model the polarity task as a classification problem, while the valence task as a regression problem. 
• We have studied the influence of different information sources like the metaphor itself, the context in which it resides, the source and 682 target domains of the metaphor, in addition to contextual features and trigger word lists developed by psychologists (Tausczik and Pennebaker, 2010). • We have conducted in depth experimental evaluation and showed that the developed methods significantly outperform baseline methods. The rest of the paper is organized as follows. Section 2 describes related work, Section 3 briefly talks about metaphors. Sections 4 and 5 describe the polarity classification and valence prediction tasks for affect of metaphor-rich texts. Both sections have information on the collected data for English, Spanish, Russian and Farsi, the conducted experiments and obtained results. Finally, we conclude in Section 6. 2 Related Work A substantial body of work has been done on determining the affect (sentiment analysis) of texts (Kim and Hovy, 2004; Strapparava and Mihalcea, 2007; Wiebe and Cardie, 2005; Yessenalina and Cardie, 2011; Breck et al., 2007). Various tasks have been solved among which polarity and valence identification are the most common. While polarity identification aims at finding the Positive and Negative affect, valence is more challenging as it has to map the affect on a [−3, +3] scale depending on its intensity (Polanyi and Zaenen, 2004; Strapparava and Mihalcea, 2007). Over the years researchers have developed various approaches to identify polarity of words (Esuli and Sebastiani, 2006), phrases (Turney, 2002; Wilson et al., 2005), sentences (Choi and Cardie, 2009) even documents (Pang and Lee, 2008). Multiple techniques have been employed, from various machine learning classifiers, to clustering and topic models. Various domains and textual sources have been analyzed such as Twitter, Blogs, Web documents, movie and product reviews (Turney, 2002; Kennedy and Inkpen, 2005; Niu et al., 2005; Pang and Lee, 2008), but yet what is missing is affect analyzer for metaphor-rich texts. While the affect of metaphors is well studied from its linguistic and psychological aspects (Blanchette et al., 2001; Tomlinson and Love, 2006; Crawdord, 2009), to our knowledge the building of computational models for polarity and valence identification in metaphor-rich texts is still a novel task (Smith et al., 2007; Veale, 2012; Veale and Li, 2012; Reyes and Rosso, 2012; Reyes et al., 2013). Little (almost no) effort has been put into multilingual computational affect models of metaphor-rich texts. Our research specifically targets the resolution of these problems and shows that it is possible to build such computational models. The experimental result provide valuable contributions and fundings, which could be used by the research community to build upon. 3 Metaphors Although there are different views on metaphor in linguistics and philosophy (Black, 1962; Lakoff and Johnson, 1980; Gentner, 1983; Wilks, 2007), the common among all approaches is the idea of an interconceptual mapping that underlies the production of metaphorical expressions. There are two concepts or conceptual domains: the target (also called topic in the linguistics literature) and the source (or vehicle), and the existence of a link between them gives rise to metaphors. 
The texts “Your claims are indefensible.” and “He attacked every weak point in my argument.” do not directly talk about argument as a war, however the winning or losing of arguments, the attack or defense of positions are structured by the concept of war. There is no physical battle, but there is a verbal battle and the structure of an argument (attack, defense) reflects this (Lakoff and Johnson, 1980). As we mentioned before, there has been a lot of work on the automatic identification of metaphors (Wilks, 2007; Shutova et al., 2010) and their mapping into conceptual space (Shutova, 2010a; Shutova, 2010b), however these are beyond the scope of this paper. Instead we focus on an equally interesting, challenging and important problem, which concerns the automatic identification of affect carried by metaphors. To conduct our study, we use human annotators to collect metaphor-rich texts (Shutova and Teufel, 2010) and tag each metaphor with its corresponding polarity (Positive/Negative) and valence [−3, +3] scores. The next sections describe the affect polarity and valence tasks we have defined, the collected and annotated metaphor-rich data for each one of the English, Spanish, Russian and Farsi languages, the conducted experiments and obtained results. 683 4 Task A: Polarity Classification 4.1 Problem Formulation Task Definition: Given metaphor-rich texts annotated with Positive and Negative polarity labels, the goal is to build an automated computational affect model, which can assign to previously unseen metaphors one of the two polarity classes. a tough pill to swallow values that gave our nation birth Clinton also came into office hoping to bridge Washington’s partisan divide. Thirty percent of our mortgages are underwater. The administration, in fact, could go further with the budget knife by eliminating the V-22 Osprey aircraft the 'things' are going to make sure their ox doesn't get gored Figure 1: Polarity Classification Figure 1 illustrates the polarity task in which the metaphors were classified into Positive or Negative. For instance, the metaphor “tough pill to swallow” has Negative polarity as it stands for something being hard to digest or comprehend, while the metaphor “values that gave our nation birth” has a Positive polarity as giving birth is like starting a new beginning. 4.2 Classification Algorithms We model the metaphor polarity task as a classification problem in which, for a given collection of N training examples, where mi is a metaphor and ci is the polarity of mi, the objective is to learn a classification function f : mi →ci in which 1 stands for positive polarity and 0 stands for negative polarity. We tested five different machine learning algorithms such as Nave Bayes, SVM with polynomial kernel, SVM with RBF kernel, AdaBoost and Stacking, out of which AdaBoost performed the best. In our experimental study, we use the freely available implementations in Weka (Witten and Frank, 2005). Evaluation Measures: To evaluate the goodness of the polarity classification algorithms, we calculate the f-score and accuracy on 10-fold cross validation. 4.3 Data Annotation To conduct our experimental study, we have used annotated data provided by the Language Computer Corporation (LCC)1, which developed anno1http://www.languagecomputer.com/ tation toolkit specifically for the task of metaphor detection, interpretation and affect assignment. They hired annotators to collect and annotate data for the English, Spanish, Russian and Farsi languages. 
The domain for which the metaphors were collected was Governance. It encompasses electoral politics, the setting of economic policy, the creation, application and enforcement of rules and laws. The metaphors were collected from political speeches, political websites, online newspapers among others (Mohler et al., 2013). The annotation toolkit allowed annotators to provide for each metaphor the following information: the metaphor, the context in which the metaphor was found, the meaning of the metaphor in the source and target domains from the perspective of a native speaker. For example, in the Context: And to all nations, we will speak for the values that gave our nation birth.; the annotators tagged the Metaphor: values that gave our nation birth; and listed as Source: mother gave birth to baby; and Target: values of freedom and equality motivated the creation of America. The same annotators also provided the affect associated with the metaphor. The agreements of the annotators as measured by LCC are: .83, .87, .80 and .61 for the English, Spanish, Russian and Farsi languages. In our study, the maximum length of a metaphor is a sentence, but typically it has the span of a phrase. The maximum length of a context is three sentences before and after the metaphor, but typically it has the span of one sentence before and after. In our study, the source and target domains are provided by the human annotators who agree on these definitions, however the source and target can be also automatically generated by an interpretation system or a concept mapper. The generation of source and target information is beyond the scope of this paper, but studying their impact on affect is important. At the same time, we want to show that if the technology for source/target detection and interpretation is not yet available, then how far can one reach by using the metaphor itself and the context around it. Later depending on the availability of the information sources and toolkits one can decide whether to integrate such information or to ignore it. In the experimental sections, we show how the individual information sources and their combination affects the resolution of the metaphor polarity and valence prediction tasks. Table 1 shows the positive and negative class 684 distribution for each one of the four languages. Negative Positive ENGLISH 2086 1443 SPANISH 196 434 RUSSIAN 468 418 FARSI 384 252 Table 1: Polarity Class Distribution for Four Languages The majority of the the annotated examples are for English. However, given the difficulty of finding bilingual speakers, we still managed to collect around 600 examples for Spanish and Farsi, and 886 examples for Russian. 4.4 N-gram Evaluation and Results N-gram features are widely used in a variety of classification tasks, therefore we also use them in our polarity classification task. We studied the influence of unigrams, bigrams and a combination of the two, and saw that the best performing feature set consists of the combination of unigrams and bigrams. In this paper, we will refer from now on to n-grams as the combination of unigrams and bigrams. Figure 2 shows a study of the influence of the different information sources and their combination with n-gram features for English. 
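Before the per-source results, here is a minimal sketch of the classification setup of Sections 4.2 and 4.4: unigram+bigram count features fed to the candidate classifiers under 10-fold cross validation. The scikit-learn estimators stand in for the Weka implementations used in the paper, and the labelled metaphors below are invented toy examples (1 = positive polarity, 0 = negative polarity).

```python
from sklearn.ensemble import AdaBoostClassifier, StackingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Invented toy metaphors with their polarity labels.
metaphors = ["values that gave our nation birth", "a tough pill to swallow",
             "bridge washington's partisan divide", "our mortgages are underwater"] * 25
labels = [1, 0, 1, 0] * 25

candidates = {
    "naive_bayes": MultinomialNB(),
    "svm_poly": SVC(kernel="poly"),
    "svm_rbf": SVC(kernel="rbf"),
    "adaboost": AdaBoostClassifier(),
    "stacking": StackingClassifier([("nb", MultinomialNB()), ("svm", SVC(kernel="rbf"))]),
}
for name, clf in candidates.items():
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), clf)   # unigrams + bigrams
    f1 = cross_val_score(model, metaphors, labels, cv=10, scoring="f1")
    print(f"{name:12s} mean f1 = {f1.mean():.3f}")
```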
[Figure 2 is a bar chart of 10-fold cross-validation results for English n-gram models built from the metaphor, source, target, source+target, context, metaphor+source+target and context+source+target information sources.]
Figure 2: Influence of Information Sources for Metaphor Polarity Classification of English Texts

For each information source (metaphor, context, source, target and their combinations), we built a separate n-gram feature set and model, which was evaluated on 10-fold cross validation. The results from this study show that for English, the more information sources one combines, the higher the classification accuracy becomes. Table 2 shows the influence of the information sources for Spanish, Russian and Farsi with the n-gram features. The best f-scores for each language are shown in bold. For Farsi and Russian, high performance is obtained both with the context alone and with the combination of the context, source and target information, while for Spanish these settings reach similar performance.

           SPANISH   RUSSIAN   FARSI
Metaphor    71.6      71.0      62.4
Source      67.1      62.4      55.4
Target      68.9      67.2      62.4
Context     73.5      77.1      67.4
S+T         76.6      68.7      62.4
M+S+T       76.0      75.4      64.2
C+S+T       76.5      76.5      68.4
Table 2: N-gram features, F-scores on 10-fold validation for Spanish, Russian and Farsi

4.5 LIWC as a Proxy for Metaphor Polarity

LIWC Repository: In addition to the n-gram features, we also used the Linguistic Inquiry and Word Count (LIWC) repository (Tausczik and Pennebaker, 2010), which has 64 word categories corresponding to different classes like emotional states, psychological processes and personal concerns, among others. Each category contains a list of words characterizing it. For instance, the LIWC category discrepancy contains words like should and could, among others, while the LIWC category inhibition contains words like block, stop and constrain. Previously, LIWC was successfully used to analyze the emotional state of bloggers and tweeters (Quercia et al., 2011) and to identify deception and sarcasm in texts (Ott et al., 2011; González-Ibáñez et al., 2011). When LIWC analyzes texts, it generates statistics such as the number of words found in category C_i divided by the total number of words in the text. For our metaphor polarity task, we use LIWC's statistics for all 64 categories and feed this information as features to the machine learning classifiers. The LIWC repository contains conceptual categories (dictionaries) for both the English and Spanish languages.

LIWC Evaluation and Results: In our experiments, LIWC is applied to English and Spanish metaphor-rich texts, since the LIWC category dictionaries are available for both languages. Table 3 shows the obtained accuracy and f-score results in English and Spanish for each one of the information sources.

           ENGLISH          SPANISH
           Acc    Fscore    Acc    Fscore
Metaphor   98.8   98.8      87.9   87.2
Source     98.6   98.6      97.3   97.3
Target     98.2   98.2      97.9   97.9
Context    91.4   91.4      93.3   93.2
S+T        98.0   98.0      76.3   75.5
M+S+T      95.8   95.7      86.8   86.0
C+S+T      87.9   88.0      79.2   78.5
Table 3: LIWC features, Accuracy and F-scores on 10-fold validation for English and Spanish

The best performances are reached with individual information sources like metaphor, context, source or target instead of their combinations. The classifiers obtain similar performance for both languages.

LIWC Category Relevance to Metaphor Polarity: We also study the importance and relevance of the LIWC categories for the metaphor polarity task.
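The LIWC statistics described above (the share of tokens falling into each of the 64 categories, fed as features to the classifiers) can be sketched in a few lines. The LIWC dictionaries themselves are licensed resources, so the category word lists in the example are placeholders, and the trailing-asterisk prefix convention is an assumption about the dictionary format.

```python
# Sketch: per-category word proportions used as classifier features.
import re

def liwc_proportions(text, categories):
    """categories: dict mapping a category name to its words; a trailing '*'
    marks a prefix pattern. Returns {category: hits / total_tokens}."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = max(len(tokens), 1)
    features = {}
    for name, words in categories.items():
        exact = {w for w in words if not w.endswith("*")}
        prefixes = tuple(w[:-1] for w in words if w.endswith("*"))
        hits = sum(1 for token in tokens
                   if token in exact or (prefixes and token.startswith(prefixes)))
        features[name] = hits / total
    return features

# e.g. liwc_proportions("we could not stop or block the plan",
#                       {"discrepancy": {"should", "could"},
#                        "inhibition": {"block", "stop", "constrain*"}})
```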
We use information gain (IG) to measure the amount of information, in bits, contributed to the polarity class prediction if the only information available is the presence of a given LIWC category (feature) and the corresponding polarity class distribution. IG measures the expected reduction in entropy (the uncertainty associated with a random feature) (Mitchell, 1997). Figure 3 illustrates how certain categories occur more with the positive (in red color) vs. the negative (in green color) class. With the positive metaphors we observe the LIWC categories for present tense, social, affect and family, while for the negative metaphors we see LIWC categories for past tense, inhibition and anger.

[Figure 3 is a bar chart of category relevance scores for the two polarity classes, covering LIWC categories such as past tense, present tense, personal pronouns, anger, family and social presence.]
Figure 3: LIWC category relevance to Metaphor Polarity

In addition, we show in Figure 4 examples of the top LIWC categories according to the IG ranking for each one of the information sources.

[Figure 4 is a table listing, for each information source (metaphor, context, source, target), the top-ranked LIWC categories and example words.]
Figure 4: Example of LIWC Categories and Words

For metaphor texts, these categories are I, conjunction, anger, discrepancy and swear words, among others; for contexts, the categories are pronouns like I and you, past tense, friends, affect and so on. Our study shows that some of the LIWC categories are important across all information sources, but overall different triggers activate depending on the information source and the length of the text used.

4.6 Comparative study

Figure 5 shows a comparison of the accuracy of our best performing approach for each language. For English and Spanish these are the LIWC models, while for Russian and Farsi these are the n-gram models. We compare the performance of the algorithms with a majority baseline, which assigns the majority class to each example. For instance, in English there are 3529 annotated examples, of which 2086 are positive and 1443 are negative. Since the positive class is the predominant one for this language and dataset, a majority classifier would have .59 accuracy in returning the positive class as an answer. Similarly, we compute the majority baseline for the rest of the languages.

           Accuracy   Majority Baseline   Difference
ENGLISH    98.80      59.11               +39.69
SPANISH    97.90      68.88               +29.02
RUSSIAN    77.00      52.82               +24.18
FARSI      72.20      60.30               +11.90
Figure 5: Best Accuracy Model and Comparison against a Majority Baseline for Metaphor Polarity Classification

As we can see from Figure 5, all classifiers significantly outperform the majority baseline. For Farsi the increment is +11.90, while for English the increment is +39.69. This means that the built classifiers perform much better than a random classifier.

4.7 Lessons Learned

To summarize, in this section we have defined the task of polarity classification and we have presented a machine learning solution. We have used different feature sets and information sources to solve the task. We have conducted exhaustive evaluations for four different languages, namely English, Spanish, Russian and Farsi.
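Before listing the lessons learned, the information-gain ranking used in Section 4.5 can be made concrete. The sketch below computes IG for a single binarised feature (presence of a LIWC category) against the polarity labels, following the standard entropy-reduction definition (Mitchell, 1997); treating category presence as a binary indicator is a simplification of the real-valued LIWC proportions.

```python
# Sketch: information gain of a binary "category present" feature.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(category_present, labels):
    """H(class) - H(class | feature), in bits."""
    present = np.asarray(category_present, dtype=bool)
    labels = np.asarray(labels)
    gain = entropy(labels)
    for value in (True, False):
        mask = present == value
        if mask.any():
            gain -= mask.mean() * entropy(labels[mask])
    return gain
```

Ranking the 64 categories by this score, separately for each information source, yields lists of the kind shown in Figure 4.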
The lessons learned from this study are: (1) for n-gram usage, the larger the context of the metaphor, the better the classification accuracy becomes; (2) if present, source and target information can further boost the performance of the classifiers; (3) LIWC is a useful resource for polarity identification in metaphor-rich texts; (4) the usage of tense (past vs. present) and of pronouns provides important triggers for positive and negative polarity of metaphors; (5) some categories like family and social presence indicate positive polarity, while others like inhibition, anger and swear words are indicative of negative affect; (6) the built models significantly outperform majority baselines.

5 Task B: Valence Prediction

5.1 Problem Formulation

Task Definition: Given metaphor-rich texts annotated with valence scores (from −3 to +3), where −3 indicates strong negativity, +3 indicates strong positivity and 0 indicates neutrality, the goal is to build a model that can predict, without human supervision, the valence scores of new, previously unseen metaphors.

[Figure 6 places the six example metaphors from Figure 1 (e.g. “a tough pill to swallow”, “values that gave our nation birth”, “the budget knife”, “bridge Washington's partisan divide”, “their ox doesn't get gored”, “Thirty percent of our mortgages are underwater.”) along the −3 to +3 valence scale.]
Figure 6: Valence Prediction

Figure 6 shows an example of the valence prediction task, in which the metaphor-rich texts must be arranged by the intensity of the emotional state provoked by the texts. For instance, −3 corresponds to very strong negativity, −2 to strong negativity and −1 to weak negativity (and similarly for the positive classes). In this task we also consider metaphors with neutral affect. They are annotated with the 0 label, and the prediction model should be able to predict such intensity as well. For instance, the metaphor “values that gave our nation birth” is considered by American people to mark a new beginning (giving birth) and has a positive score of +1, while “budget knife” is more positive (+3) since a tax cut is more important. As in any sentiment analysis task, affect assignment of metaphors is a subjective task, and the produced annotations express the values, beliefs and understanding of the annotators.

5.2 Regression Model

We model the valence task as a regression problem, in which for a given metaphor m we seek to predict the valence v of m. We do this via a parametrized function f: \hat{v} = f(m; w), where w \in \mathbb{R}^d are the weights. The objective is to learn w from a collection of N training examples \{\langle m_i, v_i \rangle\}_{i=1}^{N}, where the m_i are the metaphor examples and v_i \in \mathbb{R} is the valence score of m_i. Support vector regression (Drucker et al., 1996) is a well-known method for training a regression model by solving the following optimization problem:

\min_{w \in \mathbb{R}^d} \; \frac{1}{2}\|w\|^2 + \frac{C}{N} \sum_{i=1}^{N} \underbrace{\max(0, |v_i - f(m_i; w)| - \epsilon)}_{\epsilon\text{-insensitive loss function}}

where C is a regularization constant and \epsilon controls the training error. The training algorithm finds weights w that define a function f minimizing the empirical risk. Let h be a function mapping a metaphor into some vector-space representation \subseteq \mathbb{R}^d; then the function f takes the form f(m; w) = h(m)^T w = \sum_{i=1}^{N} \alpha_i K(m, m_i), where f is re-parameterized in terms of a polynomial kernel function K with dual weights \alpha_i. K measures the similarity between two metaphoric texts.
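A brief sketch of the regression setup may help. The experiments use the Weka SVM implementation, so the scikit-learn SVR below is only an analogous stand-in; C, epsilon and the polynomial degree correspond to the constants in the objective above, but their values here, and the reuse of unigram+bigram features, are illustrative assumptions.

```python
# Sketch: support vector regression with a polynomial kernel for valence scores.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

def train_valence_model(texts, valences, C=1.0, epsilon=0.1, degree=2):
    """texts: metaphor (or context/source/target) strings;
    valences: gold scores in [-3, +3]. Returns a fitted pipeline."""
    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),
        SVR(kernel="poly", degree=degree, C=C, epsilon=epsilon),
    )
    model.fit(texts, valences)
    return model

# predictions = train_valence_model(train_texts, train_scores).predict(test_texts)
```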
Full details of the regression model and its implementation are beyond the scope of this paper; for more details see (Schölkopf and Smola, 2001; Smola et al., 2003). In our experimental study, we use the freely available implementation of SVM in Weka (Witten and Frank, 2005).

Evaluation Measures: To evaluate the quality of the valence prediction model, we compare the actual valence scores of the metaphors given by the human annotators, denoted with y, against the valence scores predicted by the regression model, denoted with x. We estimate the goodness of the regression model by calculating both the correlation coefficient

cc_{x,y} = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n \sum x_i^2 - (\sum x_i)^2}\,\sqrt{n \sum y_i^2 - (\sum y_i)^2}}

and the mean squared error

mse_{x,y} = \frac{\sum_{i=1}^{n} (x_i - y_i)^2}{n}.

The two evaluation measures should be interpreted in the following manner. Intuitively, the higher the correlation score, the better the agreement between the actual and the predicted valence scores. Similarly, the smaller the mean squared error, the better the regression model fits the valence predictions to the actual scores.

5.3 Data Annotation

To conduct our valence prediction study, we used the same human annotators from the polarity classification task for each one of the English, Spanish, Russian and Farsi languages. We asked the annotators to map each metaphor onto a [−3, +3] scale depending on the intensity of the affect associated with the metaphor. Table 4 shows the distribution (number of examples) for each valence class and for each language.

           -3     -2     -1      0     +1     +2     +3
ENGLISH  1057    817    212    582    157    746    540
SPANISH   106     65     27     17     40    132    262
RUSSIAN   118     42    308     13    202    149     67
FARSI     147    117    120     49     91     63     98
Table 4: Valence Score Distribution for Each Language

5.4 Empirical Evaluation and Results

For each language and information source we built separate valence prediction regression models. We used the same features for the regression task as we had used in the classification task: n-grams (unigrams, bigrams and the combination of the two) and LIWC scores. Table 5 shows the obtained correlation coefficient (CC) and mean squared error (MSE) results for each one of the four languages (English, Spanish, Russian and Farsi) using the dataset described in Table 4. The Farsi and Russian regression models are based only on n-gram features, while the English and Spanish regression models have both n-gram and LIWC features. Overall, the CC for English and Spanish is higher when LIWC features are used. This means that the LIWC-based valence regression model brings the predicted values closer to those of the human annotators. The best valence prediction is obtained when LIWC is applied to the metaphor itself. The MSE for English and Spanish is the lowest, meaning that the predictions are the closest to those of the human annotators. In Russian and Farsi, the lowest MSE is obtained when the combined metaphor, source and target information sources are used. For English and Spanish the smallest MSE, or so-called prediction error, is 1.52 and 1.30 respectively, while for Russian and Farsi it is 1.62 and 2.13 respectively.

5.5 Lessons Learned

To summarize, in this section we have defined the task of valence prediction for metaphor-rich texts and we have described a regression model for its solution. We have studied different feature sets and information sources to solve the task. We have conducted exhaustive evaluations in all four languages, namely English, Spanish, Russian and Farsi.
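Before the lessons learned, the two evaluation measures defined above translate directly into code; the sketch below follows those formulas, with x the predicted and y the annotator-assigned valence scores.

```python
# Sketch: correlation coefficient and mean squared error for valence prediction.
import numpy as np

def valence_eval(predicted, gold):
    x = np.asarray(predicted, dtype=float)
    y = np.asarray(gold, dtype=float)
    n = len(x)
    cc = (n * (x * y).sum() - x.sum() * y.sum()) / (
        np.sqrt(n * (x ** 2).sum() - x.sum() ** 2)
        * np.sqrt(n * (y ** 2).sum() - y.sum() ** 2))
    mse = ((x - y) ** 2).mean()
    return cc, mse
```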
The learned lessons from this study are: (1) valence prediction is a much harder task than polarity classification both for human annotation and for the machine learning algorithms; (2) the obtained results showed that despite its difficulty this is still a plausible problem; (3) similarly to the polarity classification task, valence prediction with LIWC is improved when shorter contexts (the metaphor/source/target information source) are considered. 6 Conclusion People use metaphor-rich language to express affect and often affect is expressed through the usage of metaphors. Therefore, understanding that the metaphor “I was boiling inside when I saw him.” has Negative polarity as it conveys feeling of anger is very important for interpersonal or multicultural communications. In this paper, we have introduced a novel corpus of metaphor-rich texts for the English, Spanish, Russian and Farsi languages, which was manually annotated with the polarity and valence scores of the affect conveyed by the metaphors. We have studied the impact of different information sources such as the metaphor in isolation, the context in which the metaphor was used, the source and target domain meanings of the metaphor and 688 RUSSIAN N-gram FARSI N-gram ENGLISH N-gram SPANISH N-gram ENGLISH LIWC SPANISH LIWC CC MSE CC MSE CC MSE CC MSE CC MSE CC MSE Metaphor .45 1.71 .25 2.25 .36 2.50 .37 2.54 .74 1.52 .87 1.20 Source .22 1.89 .11 2.42 .40 2.27 .22 2.43 .81 1.30 .85 1.28 Target .25 1.91 .15 2.47 .37 2.41 .32 2.36 .72 1.56 .85 1.29 Context .43 1.83 .32 2.38 .37 2.59 .40 2.37 .40 2.16 .67 1.92 S+T .29 1.83 .18 2.38 .40 2.40 .41 2.19 .70 1.60 .78 1.53 M+S+T .45 1.62 .29 2.13 .43 2.34 .43 2.14 .67 1.67 .78 1.53 C+S+T .42 1.85 .26 2.61 .43 2.52 .39 2.41 .44 2.08 .64 1.96 Table 5: Valence Prediction, Correlation Coefficient and Mean Squared Error for English, Spanish, Russian and Farsi their combination in order to understand how such information helps and impacts the interpretation of the affect associated with the metaphor. We have conducted exhaustive evaluation with multiple machine learning classifiers and different features sets spanning from lexical information to psychological categories developed by (Tausczik and Pennebaker, 2010). Through experiments carried out on the developed datasets, we showed that the proposed polarity classification and valence regression models significantly improve baselines (from 11.90% to 39.69% depending on the language) and work well for all four languages. From the two tasks, the valence prediction problem was more challenging both for the human annotators and the automated system. The mean squared error in valence prediction in the range [−3, +3], where −3 indicates strong negative and +3 indicates strong positive affect for English, Spanish and Russian was around 1.5, while for Farsi was around 2. The current findings and learned lessons reflect the properties of the collected data and its annotations. In the future we are interested in studying the affect of metaphors for domains different than Governance. We want to conduct studies with the help of social sciences who would research whether the tagging of affect in metaphors depends on the political affiliation, age, gender or culture of the annotators. Not on a last place, we would like to improve the built valence prediction models and to collect more data for Spanish, Russian and Farsi. 
Acknowledgments The author would like to thank the reviewers for their helpful comments as well as the LCC annotators who have prepared the data and made this work possible. This research is supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense US Army Research Laboratory contract number W911NF12-C-0025. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government. References Max Black. 1962. Models and Metaphors. Isabelle Blanchette, Kevin Dunbar, John Hummel, and Richard Marsh. 2001. Analogy use in naturalistic settings: The influence of audience, emotion and goals. Memory and Cognition, pages 730–735. Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. In Proceedings of the 20th international joint conference on Artifical intelligence, IJCAI’07, pages 2683– 2688. Morgan Kaufmann Publishers Inc. Yejin Choi and Claire Cardie. 2009. Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 Volume 2, EMNLP ’09, pages 590–598. Elizabeth Crawdord. 2009. Conceptual metaphors of affect. Emotion Review, pages 129–139. Harris Drucker, Chris J.C. Burges, Linda Kaufman, Alex Smola, and Vladimir Vapnik. 1996. Support vector regression machines. In Advances in NIPS, pages 155–161. Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC06, pages 417–422. Dedre Gentner. 1983. Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2):155–170. 689 Roberto Gonz´alez-Ib´a˜nez, Smaranda Muresa n, and Nina Wacholder. 2011. Identifying sarcasm in twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2, HLT ’11, pages 581–586. Alistair Kennedy and Diana Inkpen. 2005. Sentiment classification of movie and product reviews using contextual valence shifters. Computational Intelligence, pages 110–125. Soo-Min Kim and Eduard Hovy. 2004. Determining the sentiment of opinions. In Proceedings of the 20th international conference on Computational Linguistics, COLING ’04. George Lakoff and Mark Johnson. 1980. Metaphors We Live By. University of Chicago Press, Chicago. James H. Martin. 1988. Representing regularities in the metaphoric lexicon. In Proceedings of the 12th conference on Computational linguistics - Volume 1, COLING ’88, pages 396–401. Thomas M. Mitchell. 1997. Machine Learning. McGraw-Hill, Inc., 1 edition. Michael Mohler, David Bracewell, David Hinote, and Marc Tomlinson. 2013. Semantic signatures for example-based linguistic metaphor detection. In The Proceedings of the First Workshop on Metaphor in NLP, (NAACL), pages 46–54. Yun Niu, Xiaodan Zhu, Jianhua Li, and Graeme Hirst. 2005. Analysis of polarity information in medical text. In In: Proceedings of the American Medical Informatics Association 2005 Annual Symposium, pages 570–574. 
Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T. Hancock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 309–319. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Found. Trends Inf. Retr., 2(12):1–135, January. Livia Polanyi and Annie Zaenen. 2004. Contextual lexical valence shifters. In Yan Qu, James Shanahan, and Janyce Wiebe, editors, Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications. AAAI Press. AAAI technical report SS-04-07. Daniele Quercia, Jonathan Ellis, Licia Capra, and Jon Crowcroft. 2011. In the mood for being influential on twitter. In the 3rd IEEE International Conference on Social Computing. Antonio Reyes and Paolo Rosso. 2012. Making objective decisions from subjective data: Detecting irony in customer reviews. Decis. Support Syst., 53(4):754–760, November. Antonio Reyes, Paolo Rosso, and Tony Veale. 2013. A multidimensional approach for detecting irony in twitter. Lang. Resour. Eval., 47(1):239–268, March. Bernhard Sch¨olkopf and Alexander J. Smola. 2001. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning). The MIT Press. Ekaterina Shutova and Simone Teufel. 2010. Metaphor corpus annotated for source - target domain mappings. In International Conference on Language Resources and Evaluation. Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 1002–1010. Ekaterina Shutova. 2010a. Automatic metaphor interpretation as a paraphrasing task. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 1029–1037. Ekaterina Shutova. 2010b. Models of metaphor in nlp. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 688–697. Catherine Smith, Tim Rumbell, John Barnden, Bob Hendley, Mark Lee, and Alan Wallington. 2007. Don’t worry about metaphor: affect extraction for conversational agents. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 37– 40. Association for Computational Linguistics. Alex J. Smola, Bernhard Schlkopf, and Bernhard Sch Olkopf. 2003. A tutorial on support vector regression. Technical report, Statistics and Computing. Carlo Strapparava and Rada Mihalcea. 2007. Semeval2007 task 14: Affective text. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 70–74. Association for Computational Linguistics, June. Yla R. Tausczik and James W. Pennebaker. 2010. The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods. Journal of Language and Social Psychology, 29(1):24–54, March. Marc T. Tomlinson and Bradley C. Love. 2006. From pigeons to humans: grounding relational learning in concrete examples. In Proceedings of the 21st national conference on Artificial intelligence - Volume 1, AAAI’06, pages 199–204. AAAI Press. Peter D. Turney. 2002. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. 
In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 417–424. 690 Tony Veale and Guofu Li. 2012. Specifying viewpoint and information need with affective metaphors: a system demonstration of the metaphor magnet web app/service. In Proceedings of the ACL 2012 System Demonstrations, ACL ’12, pages 7–12. Tony Veale. 2012. A context-sensitive, multi-faceted model of lexico-conceptual affect. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 75–79. Janyce Wiebe and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. language resources and evaluation. In Language Resources and Evaluation (formerly Computers and the Humanities. Yorick Wilks. 2007. A preferential, pattern-seeking, semantics for natural language inference. In Words and Intelligence I, volume 35 of Text, Speech and Language Technology, pages 83–102. Springer Netherlands. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 347–354. Ian H. Witten and Eibe Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, second edition. Ainur Yessenalina and Claire Cardie. 2011. Compositional matrix-space models for sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 172–182. 691
2013
67
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 692–700, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Large tagset labeling using Feed Forward Neural Networks. Case study on Romanian Language Tiberiu Boro Radu Ion Dan Tufi Research Institute for $UWLILFLDO,QWHOOLJHQFH³0LKDL DrJQHVFX´ Romanian Academy Research Institute for $UWLILFLDO,QWHOOLJHQFH³0LKDL DrJQHVFX´ Romanian Academy Research Institute for $UWLILFLDO,QWHOOLJHQFH³0LKDL DrJQHVFX´ Romanian Academy [email protected] [email protected] [email protected] Abstract Standard methods for part-of-speech tagging suffer from data sparseness when used on highly inflectional languages (which require large lexical tagset inventories). For this reason, a number of alternative methods have been proposed over the years. One of the most successful methods used for this task, FDOOHG7LHUHG7DJJLQJ 7XIL, 1999), exploits a reduced set of tags derived by removing several recoverable features from the lexicon morpho-syntactic descriptions. A second phase is aimed at recovering the full set of morpho-syntactic features. In this paper we present an alternative method to Tiered Tagging, based on local optimizations with Neural Networks and we show how, by properly encoding the input sequence in a general Neural Network architecture, we achieve results similar to the Tiered Tagging methodology, significantly faster and without requiring extensive linguistic knowledge as implied by the previously mentioned method. 1 Introduction Part-of-speech tagging is a key process for various tasks such as `information extraction, text-to-speech synthesis, word sense disambiguation and machine translation. It is also known as lexical ambiguity resolution and it represents the process of assigning a uniquely interpretable label to every word inside a sentence. The labels are called POS tags and the entire inventory of POS tags is called a tagset. There are several approaches to part-of-speech tagging, such as Hidden Markov Models (HMM) (Brants, 2000), Maximum Entropy Classifiers (Berger et al., 1996; Ratnaparkhi, 1996), Bayesian Networks (Samuelsson, 1993), Neural Networks (Marques and Lopes, 1996) and Conditional Random Fields (CRF) (Lafferty et al., 2001). All these methods are primarily intended for English, which uses a relatively small tagset inventory, compared to highly inflectional languages. For the later mentioned languages, the lexicon tagsets (called morphosyntactic descriptions (Calzolari and Monachini, 1995) or MSDs) may be 10-20 times or even larger than the best known tagsets for English. For instance Czech MSD tagset requires more than 3000 labels (Collins et al., 1999), Slovene more than 2000 labels (Erjavec and Krek, 2008), and Romanian more than 1100 labels (Tufi, 1999). The standard tagging methods, using such large tagsets, face serious data sparseness problems due to lack of statistical evidence, manifested by the non-robustness of the language models. When tagging new texts that are not in the same domain as the training data, the accuracy decreases significantly. Even tagging in-domain texts may not be satisfactorily accurate. One of the most successful methods used for this taVN FDOOHG 7LHUHG 7DJJLQJ 7XIL, 1999), exploits a reduced set of tags derived by removing several recoverable features from the lexicon morpho-syntactic descriptions. 
According to the MULTEXT EAST lexical specifications (Erjavec and Monachini, 1997), the Romanian tagset consists of a number of 614 MSD tags (by exploiting the case and gender regular syncretism) for wordforms and 10 punctuation tags (Tufi et al., 1997), which is still significantly larger than the tagset of English. The MULTEX EAST version 4 (Erjavec, 2010) contains specifications for a total of 16 languages: Bulgarian, Croatian, Czech, Estonian, English, Hungarian, Romanian, 692 693 In the case of out-of-vocabulary (OOV) words, both approaches use suffix analysis to determine the most probable tags that can be assigned to the current word. To clarify how these two methods work, if we want to train the network to label the current word, using a context window of 1 (previous tag, current possible tags, and possible tags for the next word) and if we have, say 100 tags in the tagset, the input is a real valued vector of 300 sub-unit elements and the output is a vector which contains 100 elements, also sub-unit real numbers. As mentioned earlier, each value in the output vector corresponds to a distinct tag from tagset and the tag assigned to the current word is chosen to correspond to the maximum value inside the output vector. The previously proposed methods still suffer from the same issue of data sparseness when applied to MSD tagging. However, in our approach, we overcome the problem through a different encoding of the input data (see section 2.1). The power of neural networks results mainly from their ability to attain activation functions over different patterns via their learning algorithm. By properly encoding the input sequence, the network chooses which input features contribute in determining the output features for MSDs (e.g. patterns composed of part of speech, gender, case, type etc. contribute independently in selecting the optimal output sequence). This way, we removed the need for explicit MSD to CTAG conversion and MSD recovery from CTAGs. 2.1 The MSD binary encoding scheme A MSD language independently encodes a part of speech (POS) with the associated lexical attribute values as a string of positional ordered character codes (Erjavec, 2004). The first character is an upper case character denoting the SDUWRIVSHHFK HJµ1¶ IRUQRXQVµ9¶IRUYHUEV µ$¶ IRU DGMHFWLYHV HWF  DQG WKH IROORZLQJ FKDUDFWHUV ORZHU OHWWHUV RU µ-µ specify the instantiations of the characteristic lexical attributes of the POS. For example, the MSD µ1FIVUQ¶ specifies a noun (the first character is µ1¶  the type of ZKLFK LV FRPPRQ µF¶ WKH second character), feminine gender µI¶ VLQJXODU number µV¶ LQQRPLQDWLYHDFFXVDWLYHFDVH µU¶  and indefinite form µQ¶ If a specific attribute is not relevant for a language, or for a given combination of feature-YDOXHVWKHFKDUDFWHUµ-¶LV used in the corresponding position. For a language which does not morphologically mark the gender and definiteness features, the earlier H[HPSOLILHG06'ZLOOEHHQFRGHGDVµ1F-sr-¶ In order to derive a binary vector for each of the 614 MSDs of Romanian we proceeded to: 1. List and sort all possible POSes of Romanian (16 POSes) and form a binary vector with 16 positions in which position k is equal 1 only if the respective MSD has the corresponding POS (i.e. the k-th POS in the sorted list of POSes); 2. 
List and sort all possible values of all lexical attributes GLVUHJDUGLQJWKHZLOGFDUGµ-µ for all POSes (94 values) and form another binary vector with 94 positions such that the k-th position of this vector is 1 if the respective MSD has an attribute with the corresponding value; 3. Concatenate the vectors from steps 1 and 2 and obtain the binary codification of a MSD as a 110-position binary vector. 2.2 The training and tagging procedure The tagger automatically assigns four dummy tokens (two at the beginning and two at the end) to the target utterance and the neural network is trained to automatically assign a MSD given the context (two previously assigned tags and the possible tags for the current and following two words) of the current word (see below for details). In our framework a training example consists of the features extracted for a single word inside an utterance as input and it¶s MSD within that utterance as output. The features are extracted from a window of 5 words centered on the current word. A single word is characterized by a vector that encodes either its assigned MSD or its possible MSDs. To encode the possible MSDs we use equation 2, where each possible attribute a, has a single corresponding position inside the encoded vector. 2:=S; L %:Sá =; %:S; (2) Note that we changed the probability estimates to account for attributes not tags. To be precise, for every word wk, we obtain its input features by concatenating a number of 5 vectors. The first two vectors encode the MSDs assigned to the previous two words (wk-1 and wk694 2).The next three vectors are used to encode the possible MSDs for the current word (wk) and the following two words (wk+1 and wk+2). During training, we also compute a list of suffixes with associated MSDs, which is used at run-time to build the possible MSDs vector for unknown words. When such words are found within the test data, we approximate their possible MSDs vector using a variation of the method proposed by Brants (2000). When the tagger is applied to a new utterance, the system iteratively calculates the output MSD for each individual word. Once a label has been assigned to a word, the ZRUG¶VDVVRFLDWHGYHFWRU is edited so it will have the value of 1 for each attribute present in its newly assigned MSD. As a consequence of encoding each individual attribute separately for MSDs, the tagger can assign new tags (that were never associated with the current word in the training corpus). Although this is a nice behavior for dealing with unknown words it is often the case that it assigns attribute values that are not valid for the wordform. To overcome these types of errors we use an additional list of words with their allowed MSDs. For an OOV word, the list is computed as a union from all MSDs that appeared with the suffixes that apply to that word. When the tagger has to assign a MSD to a given word, it selects one from the possible wordform¶V MSDs in its wordform/MSDs associated list using a simple distance function: ‹ ØÐÉ Í KÞ F AÞ á Þ@4 (3) 2 - The list of all possible MSDs for the given word J - The length of the MSD encoding (110 bits) K - The output of the Neural Network for the current word A - Binary encoding for a MSD in P 3 Network hyperparameters In our experiments, we used a fully connected, feed forward neural network with 3 layers (1 input layer, 1 hidden layer and 1 output layer) and a sigmoid activation function (equation 3). 
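Before the hyperparameter discussion continues, the encoding and decoding machinery of Section 2 can be sketched compactly: the binary MSD vector (POS positions followed by attribute-value positions) and the selection of the dictionary-licensed MSD whose encoding is closest to the network output. The inventories below are tiny placeholders rather than the actual Romanian lists of 16 POSes and 94 attribute values, and the representation of an MSD as a (POS, attribute-value set) pair is an assumption made for illustration.

```python
# Sketch: binary MSD encoding and nearest-valid-MSD decoding.
import numpy as np

POSES = ["A", "N", "V"]                      # real inventory: 16 POSes
ATTR_VALUES = ["acc", "c", "f", "gen", "m",  # real inventory: 94 values
               "n", "nom", "pl", "pos", "sg"]

def encode_msd(msd):
    """msd: (pos, attribute_values), e.g. ("N", {"c", "f", "sg", "gen"}).
    Returns the concatenated POS / attribute-value binary vector."""
    pos, values = msd
    vector = np.zeros(len(POSES) + len(ATTR_VALUES))
    vector[POSES.index(pos)] = 1.0
    for value in values:
        vector[len(POSES) + ATTR_VALUES.index(value)] = 1.0
    return vector

def closest_msd(network_output, candidate_msds):
    """Among the MSDs licensed for the word form, return the one whose
    encoding e minimises the distance sum_i |o_i - e_i| to the output o."""
    return min(candidate_msds,
               key=lambda m: np.abs(network_output - encode_msd(m)).sum())
```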
While other network architectures such as recurrent neural networks may prove to be more suitable for this task, they are extremely hard to train, thus, we traded the advantages of such architectures for the robustness and simplicity of the feed-forward networks. B:P; L s s E A?ç (3) B:P; - Neuron output P - The weighted sum of all the neuron outputs from the previous layer Based on the size of the vectors used for MSD encoding, the output layer has 110 neurons and the input layer is composed of 550 (5 x 110) neurons. In order to fully characterize our system, we took into account the following parameters: accuracy, runtime speed, training speed, hidden layer configuration and the number of optimal training iterations. These parameters have complex dependencies and relations among each other. For example, the accuracy, the optimal number of training iterations, the training and the runtime speed are all highly dependent on the hidden layer configuration. Small hidden layer give high training and runtime speeds, but often under-fit the data. If the hidden layer is too large, it can easily over-fit the data and also has a negative impact on the training and runtime speed. The number of optimal training iterations changes with the size of the hidden layer (larger layers usually require more training iterations). To obtain the trade-offs between the above mentioned parameters we devised a series of experiments, in all of which we used WKH³´ MSD annotated corpus, which is composed of 118,025 words. We randomly kept out approximately 1/10 (11,960 words) of the training corpus for building a cross-validation set. The baseline accuracy on the cross-validation set (i.e. returning the most probable tag) is 93.29%. We also used an additional inflectional wordform/MSD lexicon composed of approximately 1 million hand-validated entries. 695 The first experiment was designed to determine the trade-off between the run-time speed and the size of the hidden layer. We made a series of experiments disregarding the tagging accuracy. Hidden size Time (ms) Words/sec 50 1530 7816 70 1888 6334 90 2345 5100 110 2781 4300 130 3518 3399 150 5052 2367 170 5466 2188 190 6734 1776 210 7096 1685 230 8332 1435 250 9576 1248 270 10350 1155 290 11080 1079 310 12364 967 Table 1 - Execution time vs. number of neurons on the hidden layer Because, for a given number of neurons in the hidden layer, the tagging speed is independent on the tagging accuracy, we partially trained (using one iteration and only 1000 training sentences) several network configurations. The first network only had 50 neurons in the hidden layer and for the next networks, we incremented the hidden layer size by 20 neurons until we reached 310 neurons. The total number of tested networks is 14. After this, we measured the time it took to tag the 1984 test corpus (11,960 words) for each individual network, as an average of 3 tagging runs in order to reduce the impact of the operating system load on the tagger (Table 1 shows the figures). Determining the optimal size of the hidden layer is a very delicate subject and there are no perfect solutions, most of them being based on trial and error: small-sized hidden layers lead to under-fitting, while large hidden layers usually cause over-fitting. 
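A plain NumPy sketch of the forward pass may make the architecture above concrete: 550 inputs (five 110-bit context vectors), one hidden layer and 110 sigmoid outputs. The random weight initialisation is an assumption, and the training procedure itself (run for tens of iterations in the experiments) is omitted, so this is not a description of the actual implementation.

```python
# Sketch: forward pass of the fully connected feed-forward tagger network.
import numpy as np

def sigmoid(t):
    # f(t) = 1 / (1 + e^(-t)), the activation described above
    return 1.0 / (1.0 + np.exp(-t))

class MSDTaggerNet:
    """550 inputs (5 context MSD vectors x 110 bits), one hidden layer,
    110 outputs (one per bit of the MSD encoding)."""

    def __init__(self, n_hidden=130, n_in=550, n_out=110, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        hidden = sigmoid(x @ self.w1 + self.b1)
        return sigmoid(hidden @ self.w2 + self.b2)
```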
Also, because of the trade-off between runtime speed and the size of hidden layers, and if runtime speed is an important factor in a particular NLP application, then hidden layers with smaller number of neurons are preferable, as they surely do not over-fit the data and offer a noticeable speed boost. hidden layer Train set accuracy Cross validation accuracy 50 99.18 97.95 70 99.20 98.02 90 99.27 98.03 110 99.29 98.05 130 99.35 98.12 150 99.35 98.09 170 99.41 98.07 190 99.40 98.10 210 99.40 98.21 Table 2 - Train and test accuracy rates for different hidden layer configurations As shown in Table 1, the runtime speed of our system shows a constant decay when we increase the hidden layer size. The same decay can be seen in the training speed, only this time by an order of magnitude larger. Because training a single network takes a lot of time, this experiment was designed to estimate the size of the hidden layer which offers good performance in tagging. To do this, we individually trained a number of networks in 30 iterations, using various hidden layer configurations (50, 70, 90, 0.97 0.975 0.98 0.985 0.99 0.995 1 1 5 9 13 17 21 25 29 33 37 41 45 49 53 57 61 65 69 73 77 81 85 89 93 97 Test set Train set Number of iterrations Accuracy Figure 2 - 130 hidden layer network test and train set tagging accuracy as a function of the number of iterations 696 110, 130, 150, 170, 190, and 210 neurons) and 5 initial random initializations of the weights. For each configuration, we stored the accuracy of reproducing the learning data (the tagging of the training corpus) and the accuracy on the unseen data (test sets). The results are shown in Table 2. Although a hidden layer of 210 neurons did not seem to over-fit the data, we stopped the experiment, as the training time got significantly longer. The next experiment was designed to see how the number of training iterations influences the tagging performance of networks with different hidden layer configurations. Intuitively, the training process must be stopped when the network begins to over-fit the data (i.e. the train set accuracy increases, but the test set accuracy drops). Our experiments indicate that this is not always the case, as in some situations the continuation of the training process leads to better results on the cross-validation data (as shown in Figure 2). So, the problem comes to determining which is the most stable configuration of the neural network (i.e. which hidden unit size will be most likely to return good results on the test set) and establish the number of iterations it takes for the system to be trained. To do this, we ran the training procedure for 100 iterations and for each training iteration, we computed the accuracy rate of every individual network on the cross-validation set (see Table 3 for the averaged values). As shown, the network configuration using 130 neurons on the hidden layer is most likely to produce better results on the cross-validation set regardless of the number of iterations. Although, some other configurations provided better figures for the maximum accuracy, their average accuracy is lower than that of the 130 hidden unit network. Other good candidates are the 90 and 110 hidden unit networks, but not the larger valued ones, which display a lower average accuracy and also significantly slower tagging speeds. The most suitable network configuration for a given task depends on the language, MSD encoding size, speed and accuracy requirements. 
In our own daily applications we use the 130 hidden unit network. After observing the behavior of the various networks on the crossvalidation set we determined that a good choice is to stop the training procedure after 40 iterations. Hidden units Avg. acc. Max. acc. St. dev. 50 97.94 98.31 0.127002 70 98.03 98.31 0.12197 50 97.94 98.37 0.139762 70 98.03 98.43 0.124996 90 98.07 98.39 0.134487 110 98.08 98.45 0.127109 130 98.14 98.44 0.136072 150 98.01 98.36 0.143324 170 97.94 98.36 0.122834 Table 3 - Average and maximum accuracy for various hidden layer configuration calculated over 100 training iterations on the test set To obtain the accuracy of the system, in our last experiment we used the 130 hidden unit network and we performed the training/testing procedure on the 1984 corpus, using 10-fold validation and 30 random initializations. The final accuracy was computed as an average between all the accuracy figures measured at the end of the training process (after 40 iterations). The first 1/10 of the 1984 corpus on which we tuned the hyperparameters was not included in the test data, but was used for training. The mean accuracy of the system (98.41%) was measured as an average of 270 values. 4 Comparison to other methods ,Q KLV ZRUN &HDXu (2006) presents a different approach to MSD tagging using the Maximum Entropy framework. He presents his results on the same corpus we used for training and testing (the 1984 corpus) and he compares his method (98.45% accuracy) with the Tiered Tagging methodology (97.50%) (Tufi and Dragomirescu, 2004). Our Neural Network approach obtained similar (slightly lower) results (98.41%), although it is arguable that our split/train procedure is not identical to the one used in his work (no details were given as how the 1/10 of the training corpus was selected). Also, our POS tagger detected cases where the annotation in the Gold Standard was erroneous. One such example LV LQ ³lame de ras´ (QJOLVK ³UD]RU EODGHV´  ZKHUH³ODPH´ English ³EODGHV´ LVDQRXQ³GH´ ³for´ LVDSUHSRVLWLRQDQG³UDV´ ³VKDYLQJ´) is a supine verb (with a past participle form) which was incorrectly annotated as a noun. 697 5 Network pattern analysis Using feed-forward neural networks gives the ability to outline what input features contribute to the selection of various MSD attribute values in the output layer which might help in reducing the tagset and thus, redesigning the network topology with beneficial effects both on the speed and accuracy. To determine what input features contribute to the selection of certain MSD attribute values, one can analyze the weights inside the neural network and extract the input Æ output links that are formed during training. We used the network with 130 units on the hidden layer, which was previously trained for 100 iterations. Based on the input encoding, we divided the features into 5 groups (one group for each MSD inside the local context ± two previous MSDs, current and following two possible MSDs). For a target attribute value (noun, gender feminine, gender masculine, etc.) and for each input group, we selected the top 3 input values which support the decision of assigning the target value to the attribute (features that increase the output value) and the top 3 features which discourage this decision (features that decrease the output value). 
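The weight inspection just described can be sketched as follows: score every input position's influence on a chosen output unit (a target MSD attribute value) and list, per context group, the strongest positive (supporting) and negative (discouraging) inputs. Collapsing the two weight matrices into a single linear map, i.e. ignoring the sigmoid non-linearities, is a simplifying assumption of this sketch rather than the exact procedure used in the paper; net is an MSDTaggerNet as in the earlier sketch.

```python
# Sketch: top supporting / discouraging input features for one output unit.
import numpy as np

def feature_relevance(net, target_output, group_slices, feature_names, top_k=3):
    """group_slices: e.g. {"w[i-2]": (0, 110), "w[i-1]": (110, 220),
    "w[i]": (220, 330), "w[i+1]": (330, 440), "w[i+2]": (440, 550)}."""
    influence = net.w1 @ net.w2[:, target_output]   # one score per input bit
    report = {}
    for group, (start, stop) in group_slices.items():
        scores = influence[start:stop]
        names = feature_names[start:stop]
        order = np.argsort(scores)
        report[group] = {
            "support": [names[i] for i in order[::-1][:top_k]],
            "discourage": [names[i] for i in order[:top_k]],
        }
    return report
```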
For clarity, we will use the following notations for the groups: x G-2: group one ± the assigned MSD for the word at position i-2 x G-1: group two ± the assigned MSD for the word at position i-1 x G0: group three ± the possible MSDs for the word at position i x G1: group four± the possible MSDs for the word at position i+1 x G2: group five ± the possible MSDs for the word at position i+2 where i corresponds to the position of the word which is currently being tagged. Also, we classify the attribute values into two categories (C): (P) want to see (support the decision) and (N) GRQ¶WZDQWWRVHH (discourage the decision). Table 4 shows partial (G-1 G0 G1) examples of two target attribute values (cat=Noun and gender =Feminine) and their corresponding input features used for discrimination. Target value Group C Attribute values Noun G-1 P main (of a verb), article, masculine (gender of a noun/adjective N particle, conjunctive particle, auxiliary (of a verb), demonstrative (of a pronoun) G0 P noun, common/proper (of a noun) N adverb, pronoun, numeral, interrogative/relative (of a pronoun) G1 P genitive/dative (of a noun/adjective), particle, punctuation N conjunctive particle, strong (of a pronoun), non-definite (of a noun/adjective), exclamation mark Fem. G-1 P main (of a verb), preposition, feminine (of a noun/adjective) N auxiliary (of a verb), particle, demonstrative (of a pronoun) G0 P feminine (of a noun/adjective), nominative/accusative (of a noun/adjective), past (of a verb) N masculine (of a noun/adjective), auxiliary (of a verb), interrogative/relative (of a pronoun), adverb G1 P dative/genitive (of a noun/adjective), indicative (of a verb), feminine (of a noun/adjective) N conjunctive particle, future particle, nominative/accusative (of a noun/adjective) Table 4 ± P/N features for various attribute values. For instance, when deciding on whether to give a noun (N) label to current position (G0), we can see that the neural network has learned some interesting dependencies: at position G-1 we find an article (which frequently determines a noun) and at the current position it is very important for the word being tagged to actually be a common or proper noun (either by lexicon lookup or by suffix guessing) and not be an adverb, pronoun or numeral (POSes that cannot be found in the typical ambiguity class of a noun). At the next position of the target (G1) we also find a noun in genitive or dative, corresponding to a frequent construction in Romanian, HJ ³PDina ELDWXOXL´ EHLQJ D VHTXHQFH RI WZR nouns, the second at genitive/dative. If the neural network outputs the feminine gender to its current MSD, one may see that it 698 has actually learned the agreement rules (at least locally): the feminine gender is present both before (G-1) the target word as well as after it (G1). 6 Conclusions and future work We presented a new approach for large tagset part-of-speech tagging using neural networks. An advantage of using this methodology is that it does not require extensive knowledge about the grammar of the target language. When building a new MSD tagger for a new language one is only required to provide the training data and create an appropriate MSD encoding system and as shown, the MSD encoding algorithm is fairly simple and our proposed version works for any other MSD compatible encoding, regardless of the language. Observing which features do not participate in any decision helps design custom topologies for the Neural Network, and provides enhancements in both speed and accuracy. 
The configurable nature of our system allows users to provide their own MSD encodings, which permits them to mask certain features that are not useful for a given NLP application. If one wants to process a large amount of text and is interested only in assigning grammatical categories to words, he can use a MSD encoding in which he strips off all unnecessary features. Thus, the number of necessary neurons would decrease, which assures faster training and tagging. This is of course possible in any other tagging approaches, but our framework supports this by masking attributes inside the MSD encoding configuration file, without having to change anything else in the training corpus. During testing the system only verifies if the MSD encodings are identical and the displayed accuracy directly reflects the performance of the system on the simplified tagging schema. We also proposed a methodology for selecting a network configurations (i.e. number of hidden units), which best suites the application requirements. In our daily applications we use a network with 130 hidden units, as it provides an optimal speed/accuracy trade-off (approx. 3400 words per second with very good average accuracy). The tagger is implemented as part of a larger application that is primarily intended for text-tospeech (TTS) synthesis. The system is free for non-commercial use and we provide both web and desktop user-interfaces. It is part of the METASHARE platform and available online 2. Our primary goal was to keep the system language independent, thus all our design choices are based on the necessity to avoid using language specific knowledge, when possible. The application supports various NLP related tasks such as lexical stress prediction, syllabification, letter-to-sound conversion, lemmatization, diacritic restoration, prosody prediction from text and the speech synthesizer uses unit-selection. From the tagging perspective, our future plans include testing the system on other highly inflectional languages such as Czech and Slovene and investigating different methods for automatically determining a more suitable custom network topology, such as genetic algorithms. Acknowledgments The work reported here was funded by the project METANET4U by the European Commission under the Grant Agreement No 270893 2 http://ws.racai.ro:9191 699 References Berger, A. L., Pietra, V. J. D. and Pietra, S. A. D. 1996. A maximum entropy approach to natural language processing. Computational linguistics, 22(1), 39-71. Brants, T. 2000. TnT: a statistical part-of-speech tagger. In Proceedings of the sixth conference on applied natural language processing (pp. 224-231). Association for Computational Linguistics. Calzolari, N. and Monachini M. (eds.). 1995. Common Specifications and Notation for Lexicon Encoding and Preliminary Proposal for the Tagsets. MULTEXT Report, March. &HDXu, A. 2006. Maximum entropy tiered tagging. In Proceedings of the 11th ESSLLI Student Session (pp. 173-179). CollinV05DPVKDZ/+DMLþ-DQG7LOOPDQQ& 1999. A statistical parser for Czech. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics (pp. 505-512). Association for Computational Linguistics. Erjavec, T. and Monachini, M. (Eds.). 1997. Specifications and Notation for Lexicon Encoding. Deliverable D1.1 F. Multext-East Project COP106. Erjavec, T. 2004. MULTEXT-East version 3: Multilingual morphosyntactic specifications, lexicons and corpora. 
In Fourth International Conference on Language Resources and Evaluation, LREC (Vol. 4, pp. 1535-1538). Erjavec, T. and Krek, S. 2008. The JOS morphosyntactically tagged corpus of Slovene. In Proceedings of the Sixth International Conference RQ/DQJXDJH5HVRXUFHVDQG(YDOXDWLRQ/5(&¶ Erjavec, T. 2010. MULTEXT-East Version 4: Multilingual Morphosyntactic Specifications, Lexicons and Corpora. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA) ISBN 2-9517408-6-7. Lafferty, J., McCallum, A. and Pereira, F. C. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Marques, N. C. and Lopes, G. P. 1996. A neural network approach to part-of-speech tagging. In Proceedings of the 2nd Meeting for Computational Processing of Spoken and Written Portuguese (pp. 21-22). Ratnaparkhi, A. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of the conference on empirical methods in natural language processing (Vol. 1, pp. 133-142). Samuelsson, C. 1993. Morphological tagging based entirely on Bayesian inference. In 9th Nordic Conference on Computational Linguistics. Schmid, H. 1994. Part-of-speech tagging with neural networks. In Proceedings of the 15th conference on Computational linguistics-Volume 1 (pp. 172-176). Association for Computational Linguistics. Tufi, D., Barbu A.M., 3WUDúFX 9 Rotariu G. and Popescu C. 1997. Corpora and Corpus-Based Morpho-Lexical Processing. In Recent Advances in Romanian Language Technology, (pp. 35-56). Romanian Academy Publishing House, ISBN 97327-0626-0. 7XIL, D. 1999. Tiered tagging and combined language models classifiers. In Text, Speech and Dialogue (pp. 843-843). Springer Berlin/Heidelberg. Tufi, D., and Dragomirescu, L. 2004. Tiered tagging revisited. In Proceedings of the 4th LREC Conference (pp. 39-42). 700
2013
68
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 701–709, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Learning to lemmatise Polish noun phrases Adam Radziszewski Institute of Informatics, Wrocław University of Technology Wybrze˙ze Wyspia´nskiego 27 Wrocław, Poland [email protected] Abstract We present a novel approach to noun phrase lemmatisation where the main phase is cast as a tagging problem. The idea draws on the observation that the lemmatisation of almost all Polish noun phrases may be decomposed into transformation of singular words (tokens) that make up each phrase. We perform evaluation, which shows results similar to those obtained earlier by a rule-based system, while our approach allows to separate chunking from lemmatisation. 1 Introduction Lemmatisation of word forms is the task of finding base forms (lemmas) for each token in running text. Typically, it is performed along POS tagging and is considered crucial for many NLP applications. Similar task may be defined for whole noun phrases (Degórski, 2011). By lemmatisation of noun phrases (NPs) we will understand assigning each NP a grammatically correct NP corresponding to the same phrase that could stand as a dictionary entry. The task of NP lemmatisation is rarely considered, although it carries great practical value. For instance, any keyword extraction system that works for a morphologically rich language must deal with lemmatisation of NPs. This is because keywords are often longer phrases (Turney, 2000), while the user would be confused to see inflected forms as system output. Similar situation happens when attempting at terminology extraction from domain corpora: it is usually assumed that domain terms are subclass of NPs (Marciniak and Mykowiecka, 2013). In (1) we give an example Polish noun phrase (‘the main city of the municipality’). Throughout the paper we assume the usage of the tagset of the National Corpus of Polish (Przepiórkowski, 2009), henceforth called NCP in short. The orthographic form (1a) appears in instrumental case, singular. Phrase lemma is given as (1b). Lemmatisation of this phrase consists in reverting case value of the main noun (miasto) as well as its adjective modifier (główne) to nominative (nom). Each form in the example is in singular number (sg), miasto has neuter gender (n), gmina is feminine (f). (1) a. głównym main inst:sg:n miastem city inst:sg:n gminy municipality gen:sg:f b. główne main nom:sg:n miasto city nom:sg:n gminy municipality gen:sg:f According to the lemmatisation principles accompanying the NCP tagset, adjectives are lemmatised as masculine forms (główny), hence it is not sufficient to take word-level lemma nor the orthographic form to obtain phrase lemmatisation. Degórski (2011) discuses some similar cases. He also notes that this is not an easy task and lemma of a whole NP is rarely a concatenation of lemmas of phrase components. It is worth stressing that even the task of word-level lemmatisation is non-trivial for inflectional languages due to a large number of inflected forms and even larger number of syncretisms. According to Przepiórkowski (2007), “a typical Polish adjective may have 11 textually different forms (...) but as many as 70 different tags (2 numbers × 7 cases × 5 genders)”, which indicates the scale of the problem. What is more, several syntactic phenomena typical for Polish complicate NP lemmatisation further. 
E.g., adjectives may both precede and follow nouns they modify; many English prepositional phrases are realised in Polish using oblique case without any proposition (e.g., there is no standard Polish coun701 terpart for the preposition of as genitive case is used for this purpose). In this paper we present a novel approach to noun phrase lemmatisation where the main phase is cast as a tagging problem and tackled using a method devised for such problems, namely Conditional Random Fields (CRF). 2 Related works NP lemmatisation received very little attention. This situation may be attributed to prevalence of works targeted at English, where the problem is next to trivial due to weak inflection in the language. The only work that contains a complete description and evaluation of an approach to this task we were able to find is the work of Degórski (2011). The approach consists in incorporating phrase lemmatisation rules into a shallow grammar developed for Polish. This is implemented by extending the Spejd shallow parsing framework (Buczy´nski and Przepiórkowski, 2009) with a rule action that is able to generate phrase lemmas. Degórski assumes that lemma of each NP may be obtained by concatenating each token’s orthographic form, lemma or ‘half-lemmatised’ form (e.g. grammatical case normalised to nominative, while leaving feminine gender). The other assumption is to neglect letter case: all phrases are converted to lower case and this is not penalised during evaluation. For development and evaluation, two subsets of NCP were chosen and manually annotated with NP lemmas: development set (112 phrases) and evaluation set (224 phrases). Degórski notes that the selection was not entirely random: two types of NPs were deliberately omitted, namely foreign names and “a few groups for which the proper lemmatisation seemed very unclear”. The final evaluation was performed in two ways. First, it is shown that the output of the entire system intersects only with 58.5% of the test set. The high error rate is attributed to problems with identifying NP boundaries correctly (29.5% of test set was not recognised correctly with respect to phrase boundaries). The other experiment was to limit the evaluation to those NPs whose boundaries were recognised correctly by the grammar (70.5%). This resulted in 82.9% success rate. The task of phrase lemmatisation bears a close resemblance to a more popular task, namely lemmatisation of named entities. Depending on the type of named entities considered, those two may be solved using similar or significantly different methodologies. One approach, which is especially suitable for person names, assumes that nominative forms may be found in the same source as the inflected forms. Hence, the main challenge is to define a similarity metric between named entities (Piskorski et al., 2009; Koco´n and Piasecki, 2012), which can be used to match different mentions of the same names. Other named entity types may be realised as arbitrary noun phrases. This calls for more robust lemmatisation strategies. Piskorski (2005) handles the problem of lemmatisation of Polish named entities of various types by combining specialised gazetteers with lemmatisation rules added to a hand-written grammar. As he notes, organisation names are often built of noun phrases, hence it is important to understand their internal structure. 
Another interesting observation is that such organisation names are often structurally ambiguous, which is exemplified with the phrase (2a), being a string of items in genitive case (‘of the main library of the Higher School of Economics’). Such cases are easier to solve when having access to a collocation dictionary — it may be inferred that there are two collocations here: Biblioteka Główna and Wy˙zsza Szkoła Handlowa. (2) a. Biblioteki library gen:sg:f Głównej main gen:sg:f Wy˙zszej higher gen:sg:f Szkoły school gen:sg:f Handlowej commercial gen:sg:f b. Biblioteka library nom:sg:f Główna main nom:sg:f Wy˙zszej higher gen:sg:f Szkoły school gen:sg:f Handlowej commercial gen:sg:f While the paper reports detailed figures on named entity recognition performance, the quality of lemmatisation is assessed only for all entity types collectively: “79.6 of the detected NEs were lemmatised correctly” (Piskorski, 2005). 3 Phrase lemmatisation as a tagging problem The idea presented here is directly inspired by Degórski’s observations. First, we will also assume 702 that lemma of any NP may be obtained by concatenating simple transformations of word forms that make up the phrase. As we will show in Sec. 4, this assumption is virtually always satisfied. We will argue that there is a small finite set of inflectional transformations that are sufficient to lemmatise nearly every Polish NP. Consider example (1) again. Correct lemmatisation of the phrase may be obtained by applying a series of simple inflectional transformations to each of its words. The first two words need to be turned into nominative forms, the last one is already lemmatised. This is depicted in (3a). To show the real setting, this time we give full NCP tags and word-level lemmas assigned as a result of tagging. In the NCP tagset, the first part of each tag denotes grammatical class (adj stands for adjective, subst for noun). Adjectives are also specified for degree (pos — positive degree). (3) a. głównym główny adj:sg:inst:n:pos miastem miasto subst:sg:inst:n gminy gmina subst:sg:gen:f b. główne adj:sg:nom:n:pos cas=nom miasto subst:sg:nom:n cas=nom gminy subst:sg:gen:f = Example (3b) consists of three rows: the lemmatised phrase, the desired tags (tags that would be attached to tokens of the lemmatised phrase) and the transformations needed to obtain lemma from orthographic forms. The notation cas=nom means that to obtain the desired form (e.g. główne) you need to find an entry in a morphological dictionary that bears the same word-level lemma as the inflected form (główny) and a tag that results from taking the tag of the inflected form (adj:sg:inst:n:pos) and setting the value of the tagset attribute cas (grammatical case) to the value nom (nominative). The transformation labelled = means that the inflected form is already equal to the desired part of the lemma, hence no transformation is needed. A tagset note is in order. In the NCP tagset each tag may be decomposed into grammatical class and attribute values, where the choice of applicable attributes depends on the grammatical class. For instance, nouns are specified for number, gender and case. This assumption is important for our approach to be able to use simple tag transformations in the form replace the value of attribute A with the new value V (A=V). This is not a serious limitation, since the same assumption holds for most tagsets developed for inflectional languages, e.g., the whole MULTEXT-East family (Erjavec, 2012), Czech tagset (Jakubíˇcek et al., 2011). 
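To make the A=V notation concrete, the sketch below applies such a transformation to a single token by looking up a morphological-dictionary entry that shares the word-level lemma and carries the modified tag. The toy dictionary, the attribute-position table and the helper name are illustrative assumptions only, not the system's actual implementation.

```python
# Sketch of applying an attribute-replacement transformation such as "cas=nom"
# to one token with an NCP-style positional tag (class:attr1:attr2:...).
# The toy dictionary and the attribute positions below are illustrative only.

# (orthographic form, word-level lemma, tag) entries
MORPHO_DICT = [
    ("glowne",  "glowny", "adj:sg:nom:n:pos"),
    ("glownym", "glowny", "adj:sg:inst:n:pos"),
    ("miasto",  "miasto", "subst:sg:nom:n"),
    ("miastem", "miasto", "subst:sg:inst:n"),
]

# position of each attribute within a tag, per grammatical class (simplified)
ATTR_POSITIONS = {"adj": {"nmb": 1, "cas": 2, "gnd": 3},
                  "subst": {"nmb": 1, "cas": 2, "gnd": 3}}

def apply_transformation(orth, lemma, tag, transformation):
    """Return the surface form required by a transformation such as 'cas=nom'."""
    if transformation == "=":      # inflected form is already the lemma part
        return orth
    if transformation == "p":      # phrase-initial preposition: drop the token
        return None
    target = tag.split(":")
    for assignment in transformation.split(","):
        attr, value = assignment.split("=")
        target[ATTR_POSITIONS[target[0]][attr]] = value
    target_tag = ":".join(target)
    # find a dictionary entry with the same lemma and the transformed tag
    for entry_orth, entry_lemma, entry_tag in MORPHO_DICT:
        if entry_lemma == lemma and entry_tag == target_tag:
            return entry_orth
    return orth                    # fall back to the inflected form

print(apply_transformation("glownym", "glowny", "adj:sg:inst:n:pos", "cas=nom"))
# -> glowne
print(apply_transformation("miastem", "miasto", "subst:sg:inst:n", "cas=nom"))
# -> miasto
```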
Our idea is simple: by expressing phrase lemmatisation in terms of word-level transformations we can reduce the task to tagging problem and apply well known Machine Learning techniques that have been devised for solving such problems (e.g. CRF). An important advantage is that this allows to rely not only on the information contained within the phrase to be lemmatised, but also on tokens belonging to its local neighbourhood. Assuming that we have already trained a statistical model, we need to perform the following steps to obtain lemmatisation of a new text: 1. POS tagging, 2. NP chunking, 3. tagging with transformations by applying the trained model, 4. application of transformations to obtain NP lemmas (using a morphological dictionary to generate forms). To train the statistical model, we need training data labelled with such transformations. Probably the most reliable way to obtain such data would be to let annotators manually encode a training corpus with such transformations. However, the task would be extremely tedious and the annotators would probably have to undergo special training (to be able to think in terms of transformations). We decided for a simpler solution. The annotators were given a simpler task of assigning each NP instance a lemma and a heuristic procedure was used to induce transformations by matching the manually annotated lemmas to phrases’ orthographic forms using a morphological dictionary. The details of this procedure are given in the next section. We decided to perform the experiments using the data from Polish Corpus of Wrocław Univer703 sity of Technology1 (Broda et al., 2012). The corpus (abbreviated to KPWr from now on) contains manual shallow syntactic annotation which includes NP chunks and their syntactic heads. The main motivation to use this corpus was its very permissive licence (Creative Commons Attribution), which will not constrain any further use of the tools developed. What is more, it allowed us to release the data annotated manually with phrase lemmas and under the same licence2. One of the assumptions of KPWr annotation is that actual noun phrases and prepositional phrases are labelled collectively as NP chunks. To obtain real noun phrases, phrase-initial prepositions must be stripped off3. For practical reasons we decided to include automatic recognition of phraseinitial prepositions into our model: we introduced a special transformation for such cases (labelled p), having the interpretation that the token belongs to a phrase-initial preposition and should be discarded when generating phrase lemma. Prepositions are usually contained in single tokens. There are some cases of multi-word units which we treat as prepositions (secondary prepositions), e.g. ze wzgl˛edu na (with respect to). This solution allows to use our lemmatiser directly against chunker output to obtain NP lemmas from both NPs and PPs. For instance, the phrase o przenoszeniu bakterii drog ˛a płciow ˛a (about sexual transmission of bacteria) should be lemmatised to przenoszenie bakterii drog ˛a płciow ˛a (sexual transmission of bacteria). 4 Preparation of training data First, simple lemmatisation guidelines were developed. The default strategy is to normalise the case to nominative and the number to singular. If the phrase was in fact prepositional, phrase-initial preposition should be removed first. If changing the number would alter semantics of the phrase, it should be left plural (e.g., warunki ‘conditions’ as in terms and conditions). 
Some additional exceptions concern pronouns, fixed expressions and 1We used version 1.1 downloaded from http://www. nlp.pwr.wroc.pl/kpwr. 2The whole dataset described in this paper is available at http://nlp.pwr.wroc.pl/en/static/ kpwr-lemma. 3Note that if we decided to use the data from NCP, we would still have to face this issue. Although an explicit distinctions is made between NPs and PPs, NPs are not annotated as separate chunks when belonging to a PP chunk (an assumption which is typical for shallow parsing). proper names. They were introduced to obtain lemmas that are practically most useful. A subset of documents from KPWr corpus was drawn randomly. Each NP/PP belonging to this subset was annotated manually. Contrary to (Degórski, 2011), we made no exclusions, so the obtained set contains some foreign names and a number of cases which were hard to lemmatise manually. Among the latter there was one group we found particularly interesting. It consisted of items following the following pattern: NP in plural modified by another NP or PP in plural. For many cases it was hard to decide if both parts were to be reverted to singular, only the main one or perhaps both of them should be left in plural. We present two such cases in (4a) and (4b). For instance, (4b) could be lemmatised as opis tytułu z Wikipedii (description of a Wikipedia title), but it was not obvious if it was better than leaving the whole phrase as is. (4) a. obawy ze strony autorów ‘concerns on the part of the authors’ b. opisy tytułów z Wikipedii ‘descriptions of the Wikipedia titles’ Altogether, the annotated documents contain 1669 phrases. We used the same implementation of the 2+1 model which was used to annotate morphosyntax in NCP (Przepiórkowski and Szałkiewicz, 2012): two annotators performed the task independently, after which their decisions were compared and the discrepancies were highlighted. The annotators were given a chance to rethink their decisions concerning the highlighted phrases. Both annotators were only told which phrases were lemmatised differently by the other party but they didn’t know the other decision. The purpose of this stage was to correct obvious mistakes. Their output was finally compared, resulting in 94% phrases labelled identically (90% before reconsidering decisions). The remaining discrepancies were decided by a superannotator. The whole set was divided randomly into the development set (1105 NPs) and evaluation set (564 NPs). The development set was enhanced with wordlevel transformations that were induced automatically in the following manner. The procedure assumes the usage of a morphological dictionary extracted from Morfeusz SGJP analyser4 (Woli´nski, 4morfeusz-SGJP-src-20110416 package 704 2006). The dictionary is stored as a set of (orthographic form, word-level lemma, tag). The procedure starts with tokenisation of the manually assigned lemma. Next, a heuristic identification of phrase-initial preposition is performed. The assumption is that, having cut the preposition, all the remaining tokens of the original inflected phrase must be matched 1:1 to corresponding tokens from the human-assigned lemma. If any match problem did occur, an error was reported and such a case was examined manually. The only problems encountered were due to proper names unknown to the dictionary and misspelled phrases (altogether about 10 cases). Those cases were dealt with manually. Also, all the extracted phrase-initial prepositions were examined and no controversy was found. 
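The token-level core of this matching procedure (elaborated in the next paragraph and in Fig. 1) can be sketched roughly as follows; the miniature dictionary and the numeric ranking are illustrative stand-ins for the full Morfeusz-derived dictionary and the heuristics described below.

```python
# Rough sketch of inducing a word-level transformation for one token, given
# the tagger's (orth, lemma, tag) and the human-assigned target form.
# The tiny dictionary and the ranking weights are illustrative assumptions.

MORPHO_DICT = [
    ("ta",  "ten", "adj:sg:nom:f:pos"),
    ("ta",  "ten", "adj:sg:voc:f:pos"),
    ("tej", "ten", "adj:sg:loc:f:pos"),
]
ATTR_NAMES = {"adj": ["nmb", "cas", "gnd", "deg"]}   # simplified attribute order
RANK = {"cas=nom": 0, "nmb=sg": 1, "gnd=m3": 2}      # lower value = preferred

def induce_transformation(orth, lemma, tag, target):
    if orth == target:
        return "="                                   # form already correct
    gram_class, attrs = tag.split(":")[0], tag.split(":")[1:]
    candidates = []
    for d_orth, d_lemma, d_tag in MORPHO_DICT:
        d_class, d_attrs = d_tag.split(":")[0], d_tag.split(":")[1:]
        if d_orth == target and d_lemma == lemma and d_class == gram_class:
            # express the tag difference as attr=value assignments
            diff = [f"{ATTR_NAMES[gram_class][i]}={d}"
                    for i, (s, d) in enumerate(zip(attrs, d_attrs)) if s != d]
            candidates.append(",".join(diff))
    if not candidates:
        return None                                  # examined manually in the paper
    # heuristic ranking: prefer cas=nom, then nmb=sg, etc.
    return min(candidates, key=lambda c: min(RANK.get(a, 9) for a in c.split(",")))

print(induce_transformation("tej", "ten", "adj:sg:loc:f:pos", "ta"))
# -> cas=nom   (cas=voc is the other candidate, but it is ranked lower)
```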
The input and output to the matching procedure is illustrated in Fig. 1. The core matching happens at token level. The task is to find a suitable transformation for the given inflected form from the original phrase, its tag and word-level lemma, but also given the desired form being part of human-assigned lemma. If the inflected form is identical to the desired human-assigned lemma, the ‘=’ transformation is returned without any tag analysis. For other cases the morphological dictionary is required. For instance, the inflected form tej tagged as adj:sg:loc:f:pos should be matched to the human-assigned form ta (the row label H lem). The first subtask is to find all entries in the morphological dictionary with the orthographic form equal to human-assigned lemma (ta), the word-level lemma equal to the lemma assigned by the tagger (ten) and having a tag with the same grammatical class as the tagger has it (adj; we deliberately disallow transformations changing the grammatical class). The result is a set of entries with the given lemma and orthographic form, but with different tags attached. For the example considered, two tags may be obtained: adj:sg:nom:f:pos and adj:sg:voc:f:pos (the former is in nominative case, the latter — in vocative). Each of the obtained tags is compared to the tag attached to the inflected forms (adj:sg:loc:f:pos) and this way candidate transformations are generated (cas=nom and cas=voc here). The transformations are heuristically ranked. Most importantly, obtained from http://sgjp.pl/morfeusz/ dopobrania.html. The package is available under 2-clause BSD licence. cas=nom is always preferred, then nmb=sg (enforcing singular number), then transforming the gender to different values, preferably to masculine inanimate (gnd=m3). The lowest possible ranking is given to a transformation enforcing case value other than nominative. Original: przy tej drodze T tags: prep: adj: subst: loc sg:loc:f:pos sg:loc:f T lem: przy ten droga H lem: ta droga Transf.: p cas=nom cas=nom Figure 1: Matching of an NP and its lemma. The first row shows the original inflected form. The next three present tagger output: tags (split into two rows) and lemmas. H lem stands for the lemma assigned by a human. Last row presents the transformations induced. We are fully aware of limitations of this approach. This ranking was inspired only by intuition obtained from the lemmatisation guidelines and the transformations selected this way may be wrong in a number of cases. While many transformations may lead to obtaining the same lemma for a given form, many of them will still be accidental. Different syncretisms may apply to different lexemes, which can negatively impact the ability of the model to generalise from one phrase to other. On the other hand, manual inspection of some fragments suggest that the transformations inferred are rarely unjustified. The frequencies of all transformations induced from the development set are given in Tab. 1. Note that the first five most frequent transformation make up 98.7% of all cases. These findings support our hypothesis that a small finite set of transformations is sufficient to express lemmatisation of nearly every Polish NP. We have also tested an alternative variant of the matching procedure that included additional transformation ‘lem’ with the meaning take the word-level lemma assigned by the tagger as the correct lemmatisation. 
This transformation could be induced after an unsuccessful attempt to induce the ‘=’ transformation (i.e., if the correct humanassigned lemmatisation was not identical to orthographic form). This resulted in replacing a number of tag-level transformations (mostly cas=nom) with the simple ‘lem’. The advantage of this vari705 = 2444 72% cas=nom 434 13% p 292 9% nmb=sg 97 3% cas=nom,nmb=sg 76 2% gnd=m3 9 cas=nom,gnd=m3,nmb=sg 7 gnd=m3,nmb=sg 6 acn,cas=nom 5 acm=rec,cas=nom 3 cas=gen 3 cas=nom,gnd=m3 3 cas=nom,gnd=m1 2 gnd=f,nmb=sg 2 cas=nom,gnd=f 1 cas=nom,gnd=f,nmb=sg 1 cas=nom,nmb=pl 1 cas=nom,nmb=sg,gnd=m3 1 Total 3387 100% Table 1: Frequencies of transformations. ant is that application of this transformation does not require resorting to the dictionary. The disadvantage is that it is likely to worsen the generalising power of the model. 5 CRF and features The choice of CRF for sequence labelling was mainly influenced by its successful application to chunking of Polish (Radziszewski and Pawlaczek, 2012). The work describes a feature set proposed for this task, which includes word forms in a local window, values of grammatical class, gender, number and case, tests for agreement on number, gender and case, as well as simple tests for letter case. We took this feature set as a starting point. Then we performed some experiments with feature generation and selection. For this purpose the development set was split into training and testing part. The most obvious, yet most successful change was to introduce features returning the chunk tag assigned to a token. As KPWr also contains information on the location of chunks’ syntactic heads and this information is also output by the chunker, we could also use this in our features. Another improvement resulted from completely removing tests for grammatical gender and limiting the employed tests for number to the current token. The final feature set includes the following items: • the word forms (turned lower-case) of tokens occupying a local window (−2, . . . , +2), • word form bigrams: (−1, 0) and (0, 1), • chunk tags (IOB2 tags concatenated with Boolean value denoting whether the syntactic head is placed at the position), for a local window (−1, 0, +1) • chunk tags (IOB2 tags only) for positions −2 and +2, and two chunk tag bigrams: (−1, 0) and (0, 1), • grammatical class of tokens in the window (−2, . . . , +2), • grammatical class for the focus token (0) concatenated with the last character of the wordform, • values of grammatical case for tokens (−2, −1, +1, +2), • grammatical class of the focus token concatenated with its gender value, • 2-letter prefix of the word form (lowercased), • tests for agreements and letter case as in (Radziszewski and Pawlaczek, 2012). 6 Evaluation The performed evaluation assumed training of the CRF on the whole development set annotated with the induced transformations and then applying the trained model to tag the evaluation part with transformations. Transformations were then applied and the obtained phrase lemmas were compared to the reference annotation. This procedure includes the influence of deficiencies of the morphological dictionary. The version of KPWr used here was tagged automatically using the WCRFT tagger (Radziszewski, 2013), hence tagging errors are also included. 
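In code, the train-and-tag cycle just described might look roughly like the sketch below; sklearn-crfsuite is used merely as a convenient stand-in for the CRF implementation actually employed, and the feature function is a heavily reduced version of the Sec. 5 feature set.

```python
# Minimal sketch of the train/apply cycle: tokens are labelled with
# transformations ("=", "cas=nom", "p", ...) induced as in Sec. 4.
import sklearn_crfsuite

def token_features(sent, i):
    """sent: list of (orth, tag, chunk_tag) triples for one sentence."""
    orth, tag, chunk = sent[i]
    parts = tag.split(":")
    feats = {
        "orth.lower": orth.lower(),
        "prefix2": orth.lower()[:2],
        "class": parts[0],
        "case": parts[2] if len(parts) > 2 else "NA",
        "chunk": chunk,
    }
    if i > 0:
        feats["orth[-1]"] = sent[i - 1][0].lower()
    if i < len(sent) - 1:
        feats["orth[+1]"] = sent[i + 1][0].lower()
    return feats

def featurise(sentences):
    return [[token_features(s, i) for i in range(len(s))] for s in sentences]

train_sents = [[("przy", "prep:loc", "B-NP"),
                ("tej", "adj:sg:loc:f:pos", "I-NP"),
                ("drodze", "subst:sg:loc:f", "I-NP")]]
train_labels = [["p", "cas=nom", "cas=nom"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(featurise(train_sents), train_labels)
predicted = crf.predict(featurise(train_sents))  # transformations, then applied
                                                 # via the morphological dictionary
```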
Degórski (2011) reports separate figures for the performance of the entire system (chunker + NP lemmatiser) on the whole test set and performance of the entire system limiting the test set only to those phrases that the system is able to chunk correctly (i.e., to output correct phrase boundaries). Such a choice is reasonable given that his system 706 is based on rules that intermingle chunking with lemmatisation. We cannot expect the system to lemmatise correctly those groups which it is unable to capture. Our approach assumes two-stage operation, where the chunker stage is partially independent from the lemmatisation. This is why we decided to report performance of the whole system on the whole test set, but also, performance of the lemmatisation module alone on the whole test set. This seems more appropriate, since the chunker may be improved or completely replaced independently, while discarding the phrases that are too hard to parse is likely to bias the evaluation of the lemmatisation stage (what is hard to chunk is probably also hard to lemmatise). For the setting where chunker was used, we used the CRF-based chunker mentioned in the previous section (Radziszewski and Pawlaczek, 2012). The chunker has been trained on the entire KPWr except for the documents that belong to the evaluation set. Degórski (2011) uses concatenation of wordlevel base forms assigned by the tagger as a baseline. Observation of the development set suggests that returning the original inflected NPs may be a better baseline. We tested both variants. As detection of phrase-initial prepositions is a part of our task formulation, we had to implement it in the baseline algorithms as well. Otherwise, the comparison would be unfair. We decided to implement both baseline algorithms using the same CRF model but trained on fabricated data. The training data for the ‘take-orthographic-form’ baseline was obtained by leaving the ‘remove-phrase-initialpreposition’ (‘p’) transformation and replacing all others with ‘=’. Similarly, for the ‘take-lemma’ baseline, other transformations were substituted with ‘lem’. The results of the full evaluation are presented in Tab. 2. The first conclusion is that the figures are disappointingly low, but comparable with the 58.5% success rate reported in (Degórski, 2011). The other observation is that the proposed solution significantly outperforms both baseline, out of which the ‘take-orthographic-form’ (orth baseline) performs slightly better. Also, it turns out that the variation of the matching procedure using the ‘lem’ transformation (row labelled CRF lem) performs slightly worse than the procedure without this transformation (row CRF nolem). This supports the suspicion that relying on wordlevel lemmas may reduce the ability to generalise. Algorithm Prec. Recall F CRF nolem 55.1% 56.9% 56.0% CRF lem 53.7% 55.5% 54.6% orth baseline 38.6% 39.9% 39.2% lem baseline 36.2% 37.4% 36.8% Table 2: Performance of NP lemmatisation including chunking errors. Results corresponding to performance of the lemmatisation module alone are reported in Tab. 3. The test has been performed using chunk boundaries and locations of syntactic heads taken from the reference corpus. In this settings recall and precision have the same interpretation, hence we simply refer to the value as accuracy (percentage of chunks that were lemmatised correctly). The figures are considerably higher than those reported in Tab. 2, which shows the huge impact of chunking errors. 
It is worth noting that the best accuracy achieved is only slightly lower than that achieved by Degórski (82.9%), while our task is harder. As mentioned above, in Degórski’s setting, the phrases that are too hard to parse are excluded from the test set. Those phrases are also likely to be hard cases for lemmatisation. The other important difference stems from phrase definitions used in both corpora; NPs in NCP are generally shorter than the chunks allowed in KPWr. Most notably, KPWr allows the inclusion of PP modifiers within NP chunks (Broda et al., 2012). It seems likely that the proposed algorithm would performed better when trained on data from NCP which assumes simpler NP definition. Note that the complex NP definition in KPWr also explains the huge gap between results of lemmatisation alone and lemmatisation including chunking errors. Algorithm Correct lemmas Accuracy CRF nolem 455 / 564 80.7% CRF lem 444 / 564 78.7% orth baseline 314 / 564 55.7% lem baseline 290 / 564 51.4% Table 3: Performance of NP lemmatisation alone. We also checked the extent to which the entries unknown to the morphological dictionary could lower the performance of lemmatisation. It turned out that only 8 words couldn’t be transformed during evaluation due to lack of the entries that 707 were sought in the morphological dictionary, out of which 5 were anyway handled correctly in the end by using the simple heuristic to output the ‘=’ transformation when everything else fails. A rudimentary analysis of lemmatiser output indicates that the most common error is the assignment of the orthographic form as phrase lemma where something else was expected. This seems to concern mostly many NPs that are left in plural, even simple ones (e.g. audycje telewizyjne ‘TV programmes’), but there are also some cases of personal pronouns left in oblique case (was ‘youpl-accusative/genitive’). It seems that a part of these cases come from tagging errors (even if the correct transformation is obtained, the results of its application depend on the tag and lemma attached to the inflected form by the tagger). Not surprisingly, proper names are hard cases for the model (e.g. Pod Napi˛eciem was lemmatised to napi˛ecie, which would be correct weren’t it a title). 7 Conclusions and further work We presented a novel approach to lemmatisation of Polish noun phrases. The main advantage of this solution is that it allows to separate the lemmatisation phrase from the chunking phrase. Degórski’s rule-based approach (Degórski, 2011) was also built on top of an existing parser but, as he notes, to improve the lemmatisation accuracy, the grammar underlying the parser should actually be rewritten with lemmatisation in mind. The other advantage of the approach presented here is that it is able to learn from a corpus containing manually assigned phrase lemmas. Extending existing chunk-annotated corpora with phrase lemmas corresponds to a relatively simple annotation task. The performance figures obtained by our algorithm are comparable with that of Degórski’s grammar, while the conditions under which our system was evaluated were arguably less favourable. To enable a better comparison it would be desirable to evaluate our approach against the phrases from NCP. The main disadvantage of the approach lies in the data preparation stage. It requires some semimanual work to obtain labelling with transformations, which is language- and tagset-dependent. 
A very interesting alternative has been suggested by an anonymous reviewer: instead of considering tag-level transformations that require an exhaustive morphological dictionary, it would be simpler to rely entirely on string-to-string transformations that map inflected forms to their expected counterparts. Such transformations may be expressed in terms of simple edit scripts, which has already been successfully applied to word-level lemmatisation of Polish and other languages (Chrupała et al., 2008). This way, the training data labelled with transformations could be obtained automatically. What is more, application of such transformations also does not depend on the dictionary. It is not obvious how this would affect the performance of the module and, hence, needs to be evaluated. We plan this as our further work. Also, it would be worthwhile to evaluate the presented solution for other Slavic languages. Acknowledgments This work was financed by Innovative Economy Programme project POIG.01.01.02-14-013/09. References Robert Bembenik, Łukasz Skonieczny, Henryk Rybi´nski, Marzena Kryszkiewicz, and Marek Niezgódka, editors. 2013. Intelligent Tools for Building a Scientific Information Platform, volume 467 of Studies in Computational Intelligence. Springer Berlin Heidelberg. Bartosz Broda, Michał Marci´nczuk, Marek Maziarz, Adam Radziszewski, and Adam Wardy´nski. 2012. KPWr: Towards a free corpus of Polish. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Mehmet U˘gur Do˘gan, Bente Maegaard, Joseph Mariani, Jan Odijk, and Stelios Piperidis, editors, Proceedings of LREC’12, Istanbul, Turkey. ELRA. Aleksander Buczy´nski and Adam Przepiórkowski. 2009. Human language technology. challenges of the information society. chapter Spejd: A Shallow Processing and Morphological Disambiguation Tool, pages 131–141. Springer-Verlag, Berlin, Heidelberg. Grzegorz Chrupała, Georgiana Dinu, and Josef van Genabith. 2008. Learning morphology with Morfette. In Nicoletta Calzolari, Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, and Daniel Tapias, editors, Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), Marrakech, Morocco, may. European Language Resources Association (ELRA). Łukasz Degórski. 2011. Towards the lemmatisation of Polish nominal syntactic groups using a shallow 708 grammar. In Pascal Bouvry, Mieczysław A. Kłopotek, Franck Leprevost, Małgorzata Marciniak, Agnieszka Mykowiecka, and Henryk Rybi´nski, editors, Security and Intelligent Information Systems: International Joint Conference, SIIS 2011, Warsaw, Poland, June 13-14, 2011, Revised Selected Papers, volume 7053 of Lecture Notes in Computer Science, pages 370–378. Springer-Verlag. Tomaž Erjavec. 2012. MULTEXT-East: morphosyntactic resources for Central and Eastern European languages. Language Resources and Evaluation, 46(1):131–142. Miloš Jakubíˇcek, Vojtˇech Kováˇr, and Pavel Šmerk. 2011. Czech morphological tagset revisited. In Proceedings of Recent Advances in Slavonic Natural Language Processing, pages 29–42, Brno. Jan Koco´n and Maciej Piasecki. 2012. Heterogeneous named entity similarity function. In Petr Sojka, Aleš Horák, Ivan Kopeˇcek, and Karel Pala, editors, Text, Speech and Dialogue, volume 7499 of Lecture Notes in Computer Science, pages 223–231. Springer Berlin Heidelberg. Małgorzata Marciniak and Agnieszka Mykowiecka. 2013. Terminology extraction from domain texts in Polish. In Bembenik et al. (Bembenik et al., 2013), pages 171–185. 
Jakub Piskorski, Karol Wieloch, and Marcin Sydow. 2009. On knowledge-poor methods for person name matching and lemmatization for highly inflectional languages. Information Retrieval, 12(3):275–299. Jakub Piskorski. 2005. Named-entity recognition for Polish with SProUT. In Leonard Bolc, Zbigniew Michalewicz, and Toyoaki Nishida, editors, Intelligent Media Technology for Communicative Intelligence, volume 3490 of Lecture Notes in Computer Science, pages 122–133. Springer Berlin Heidelberg. Adam Przepiórkowski. 2007. Slavic information extraction and partial parsing. In Proceedings of the Workshop on Balto-Slavonic Natural Language Processing, pages 1–10, Praga, Czechy, June. Association for Computational Linguistics. Adam Przepiórkowski. 2009. A comparison of two morphosyntactic tagsets of Polish. In Violetta Koseska-Toszewa, Ludmila Dimitrova, and Roman Roszko, editors, Representing Semantics in Digital Lexicography: Proceedings of MONDILEX Fourth Open Workshop, pages 138–144, Warszawa. Adam Przepiórkowski and Łukasz Szałkiewicz. 2012. Anotacja morfoskładniowa. In Adam Przepiórkowski, Mirosław Ba´nko, Rafał L. Górski, and Barbara Lewandowska-Tomaszczyk, editors, Narodowy Korpus J˛ezyka Polskiego. Wydawnictwo Naukowe PWN, Warsaw. Adam Radziszewski and Adam Pawlaczek. 2012. Large-scale experiments with NP chunking of Polish. In Proceedings of the 15th International Conference on Text, Speech and Dialogue, Brno, Czech Republic. Springer Verlag. Adam Radziszewski. 2013. A tiered CRF tagger for Polish. In Bembenik et al. (Bembenik et al., 2013), pages 215–230. Peter Turney. 2000. Learning algorithms for keyphrase extraction. Information Retrieval, 2:303–336. Marcin Woli´nski. 2006. Morfeusz — a practical tool for the morphological analysis of Polish. In Mieczysław A. Kłopotek, Sławomir T. Wierzcho´n, and Krzysztof Trojanowski, editors, Proceedings of IIPWM’06, pages 511–520, Ustro´n, Poland, June 19–22. Springer-Verlag, Berlin. 709
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 64–72, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Plurality, Negation, and Quantification: Towards Comprehensive Quantifier Scope Disambiguation Mehdi Manshadi, Daniel Gildea, and James Allen University of Rochester 734 Computer Studies Building Rochester, NY 14627 mehdih,gildea,[email protected] Abstract Recent work on statistical quantifier scope disambiguation (QSD) has improved upon earlier work by scoping an arbitrary number and type of noun phrases. No corpusbased method, however, has yet addressed QSD when incorporating the implicit universal of plurals and/or operators such as negation. In this paper we report early, though promising, results for automatic QSD when handling both phenomena. We also present a general model for learning to build partial orders from a set of pairwise preferences. We give an n log n algorithm for finding a guaranteed approximation of the optimal solution, which works very well in practice. Finally, we significantly improve the performance of the previous model using a rich set of automatically generated features. 1 Introduction The sentence there is one faculty member in every graduate committee is ambiguous with respect to quantifier scoping, since there are at least two possible readings: If one has wide scope, there is a unique faculty member on every committee. If every has wide scope, there can be different faculty members on each committee. Over the past decade there has been some work on statistical quantifier scope disambiguation (QSD) (Higgins and Sadock, 2003; Galen and MacCartney, 2004; Manshadi and Allen, 2011a). However, the extent of the work has been quite limited for several reasons. First, in the past two decades, the main focus of the NLP community has been on shallow text processing. As a deep processing task, QSD is not essential for many NLP applications that do not require deep understanding. Second, there has been a lack of comprehensive scope-disambiguated corpora, resulting in the lack of work on extensive statistical QSD. Third, QSD has often been considered only in the context of explicit quantification such as each and every versus some and a/an. These co-occurrences do not happen very often in real-life data. For example, Higgins and Sadock (2003) find fewer than 1000 sentences with two or more explicit quantifiers in the Wall Street journal section of Penn Treebank. Furthermore, for more than 60% of those sentences, the order of the quantifiers does not matter, either as a result of the logical equivalence (as in two existentials), or because they do not have any scope interaction. Having said that, with deep language processing receiving more attention in recent years, QSD is becoming a real-life issue.1 At the same time, new scope-disambiguated corpora have become available (Manshadi et al., 2011b). In this paper, we aim at tackling the third issue mentioned above. We push statistical QSD beyond explicit quantification, and address an interesting, yet practically important, problem in QSD: plurality and quantification. In spite of an extensive literature in theoretical semantics (Hamm and Hinrichs, 2010; Landmann, 2000), this topic has not been well investigated in computational linguistics. To illustrate the phenomenon, consider (1): 1. Three words start with a capital letter. 
A deep understanding of this sentence, requires deciding whether each word in the set, referred to by Three words, starts with a potentially distinct capital letter (as in Apple, Orange, Banana) or there is a unique capital letter which each word starts with (as in Apple, Adam, Athens). By treating the NP Three words as a single atomic entity, earlier work on automatic QSD has overlooked this problem. In general, every plural NP potentially introduces an implicit universal, ranging 1For example, Liang et al. (2011) in their state-of-the-art statistical semantic parser within the domain of natural language queries to databases, explicitly devise quantifier scoping in the semantic model. 64 over the collection of entities introduced by the plural.2 Scoping this implicit universal is just as important. While explicit universals may not occur very often in natural language, the usage of plurals is very common. Plurals form 18% of the NPs in our corpus and 20% of the nouns in Penn Treebank. Explicit universals, on the other hand, form less than 1% of the determiners in Penn Treebank. Quantifiers are also affected by negation. Previous work (e.g., Morante and Blanco, 2012) has investigated automatically detecting the scope and focus of negation. However, the scope of negation with respect to quantifiers is a different phenomenon. Consider the following sentence. 2. The word does not start with a capital letter. Transforming this sentence into a meaning representation language, for almost any practical purposes, requires deciding whether the NP a capital letter lies in the scope of the negation or outside of it. The former describes the preferred reading where The word starts with a lowercase letter as in apple, orange, banana, but the latter gives the unlikely reading, according to which there exists a particular capital letter, say A, that The word starts with, as in apple, Orange, Banana. By not involving negation in quantifier scoping, a semantic parser may produce an unintended interpretation. Previous work on statistical QSD has been quite restricted. Higgins and Sadock (2003), which we refer to as HS03, developed the first statistical QSD system for English. Their system disambiguates the scope of exactly two explicitly quantified NPs in a sentence, ignoring indefinite a/an, definites and bare NPs. Manshadi and Allen (2011a), hence MA11, go beyond those limitations and scope an arbitrary number of NPs in a sentence with no restriction on the type of quantification. However, although their corpus annotates the scope of negations and the implicit universal of plurals, their QSD system does not handle those. As a step towards comprehensive automatic QSD, in this paper we present our work on automatic scoping of the implicit universal of plurals and negations. For data, we use a new revision of MA11’s corpus, first introduced in Manshadi et al. (2011b). The new revision, called QuanText, carries a more detailed, fine-grained scope annotation (Manshadi et al., 2012). The performance of 2Although plurals carry different types of quantification (Herbelot and Copestake, 2010), almost always there exists an implicit universal. The importance of scoping this universal, however, may vary based on the type of quantification. our model defines a baseline for future efforts on (comprehensive) QSD over QuanText. In addition to addressing plurality and negation, this work improves upon MA11’s in two directions. 
• We theoretically justify MA11’s ternaryclassification approach, formulating it as a general framework for learning to build partial orders. An n log n algorithm is then given to find a guaranteed approximation within a fixed ratio of the optimal solution from a set of pairwise preferences (Sect. 3.1). • We replace MA11’s hand-annotated features with a set of automatically generated linguistic features. Our rich set of features significantly improves the performance of the QSD model, even though we give up the goldstandard dependency features (Sect. 3.3). 2 Task definition In QuanText, scope-bearing elements (or, as we call them, scopal terms) of each sentence have been identified using labeled chunks, as in (3). 3. Replace [1/ every line] in [2/ the file] ending in [3/ punctuation] with [4/ a blank line] . NP chunks follow the definition of baseNP (Ramshaw and Marcus, 1995) and hence are flat. Outscoping relations are used to specify the relative scope of scopal terms. The relation i > j means that chunk i outscopes (or has wide scope over) chunk j. Equivalently, chunk j is said to have narrow scope with respect to i. Each sentence is annotated with its most preferred scoping (according to the annotators’ judgement), represented as a partial order: 4. SI : (2 > 1 > 4; 1 > 3) If neither i > j nor j > i is entailed from the scoping, i and j are incomparable. This happens if both orders are equivalent (as in two existentials) or when the two chunks have no scope interaction. Since a partial order can be represented by a Directed Acyclic Graph (DAG), we use DAGs to represent scopings. For example, G1 in Figure 1 represents the scoping in (4). 2.1 Evaluation metrics Given the gold standard DAG Gg = (V, Eg) and the predicted DAG Gp = (V, Ep), a similarity measure may be defined based on the ratio of the number of pairs (of nodes) labeled correctly to the 65 2 1 3 4 (a) G1 2 1 3 4 (b) G+ 1 2 1 4 3 (c) G2 2 1 3 4 (d) G3 Figure 1: Scoping as DAG total number of pairs. In order to take the transitivity of outscoping relations into account, we use the transitive closure (TC) of DAGs. Let G+ = (V, E+) represent the TC of a DAG G = (V, E).3 G1 and G+ 1 in Figure 1 illustrate this concept. We now define the similiarty metric S+ as follows: σ+ = |E+ p ∩E+ g | ∪| ¯ E+ p ∩¯ E+ g | |V |(|V | −1)/2 (1) in which ¯G = (V, ¯E) is the complement of the underlying undirected version of G. HS03 and others have used such a similarity measure for evaluation purposes. A disadvantage of this metric is that it gives the same weight to outscoping and incomparability relations. In practice, if two scopal terms with equivalent ordering (and hence, no outscoping relation) are incorrectly labeled with an outscoping, the logical form still remains valid. But if an outscoping relation is mislabeled, it will change the interpretation of the sentence. Therefore, in MA11, we suggest defining a precision/recall based on the number of outscoping relations recovered correctly: 4 P + = |E+ p ∩E+ g | |E+ p | , R+ = |E+ p ∩E+ g | |E+ g | (2) 3 (u, v) ∈G+ ⇐⇒((u, v)∈G ∨ ∃w1 . . . wn ∈V, (u, w1) . . . (wn, v) ∈E ) 4MA11 argues that TC-based metrics tend to produce higher numbers. For example if G3 in Figure 1 is a goldstandard DAG and G1 is a candidate DAG, TC-based metrics count 2>3 as another match, even though it is entailed from 2 > 1 and 1 > 3. They give an alternative metric based on transitive reduction (TR), obtained by removing all the redundant edges of a DAG. TR-based metrics, however, have their own disadvantage. 
For example, if G2 is another candidate for G3, TR-based metrics produce the same numbers for both G1 and G2, even though G1 is clearly closer to G3 than G2. Therefore, in this paper we stick to TC-based metrics. 3 Our framework 3.1 Learning to do QSD Since we defined QSD as a partial ordering, automatic QSD would become the problem of learning to build partial orders. The machine learning community has studied the problem of learning total orders (ranking) in depth (Cohen et al., 1999; Furnkranz and Hullermeier, 2003; Hullermeier et al., 2008). Many ranking systems create partial orders as output when the confidence level for the relative order of two objects is below some threshold. However, the target being a partial order is a fundamentally different problem. While the lack of order between two elements is interpreted as the lack of confidence in the former, it should be interpreted as incomparability in the latter. Learning to build partial orders has not attracted much attention in the learning community, although as seen shortly, the techniques developed for ranking can be adopted for learning to build partial orders. As mentioned before, a partial order P can be represented by a DAG G, with a preceding b in P if and only if a reaches b in G by a directed path. Although there could be many DAGs representing a partial order P, only one of those is a transitive DAG.5 Therefore, in order to have a one-to-one relationship between QSDs and DAGs, we only consider the class of transitive DAGs, or TDAG. Every non-transitive DAG will be converted into its transitive counterpart by taking its transitive closure (as shown in Figure 1). Consider V , a set of nodes and a TDAG G = (V, E). It would help to think of disconnected nodes u, v of G, as connected with a null edge ϵ. We define the labeling function δG : V × V −→ {+, −, ϵ} assigning one of the three labels to each pair of nodes in G: δG(u, v) =    + (u, v) ∈G − (v, u) ∈G ϵ otherwise (3) Given the true TDAG ˆG = (V, ˆE), and a candidate TDAG G, we define the Loss function to be the total number of incorrect edges: L(G, ˆG) = X u≺v∈V I(δG(u, v) ̸= δ ˆG(u, v)) (4) in which ≺is an arbitrary total order over the nodes in V 6, and I(·) is the indicator function. We 5G is transitive iff (u, v), (v, w) ∈G =⇒(u, w) ∈G. 6E.g., the left-to-right order of the corresponding chunks in the sentence. 66 adopt a minimum Bayes risk (MBR) approach, with the goal of finding the graph with the lowest expected loss against the (unknown) target graph: G∗= argmin G∈TDAG E ˆG h L(G, ˆG) i (5) Substituting in the definition of the loss function and exchanging the order of the expectation and summation, we get: G∗= argmin G∈TDAG X u≺v∈V E ˆG  I(δG(u, v) ̸= δ ˆG(u, v)  = argmin G∈TDAG X u≺v∈V P(δG(u, v) ̸= δ ˆG(u, v)) (6) This means that in order to solve Eq. (5), we need only the probabilities of each of the three labels for each of the C(n, 2) = n(n −1)/2 pairs of nodes7 in the graph, rather than a probability for each of the superexponentially many possible graphs. We train a classifier to estimate these probabilities directly for a given pair. Therefore, we have reduced the problem of predicting a partial order to pairwise comparison, analogous to ranking by pairwise comparison or RPC (Hullermeier et al., 2008; Furnkranz and Hullermeier, 2003), a popular technique in learning total orders. 
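As an illustration of this reduction, the sketch below turns a gold scoping DAG into the C(n, 2) ternary instances defined by Eq. (3), computed on the transitive closure; networkx and the stubbed feature function are illustrative choices, not the authors' implementation.

```python
# Sketch of the pairwise reduction: every unordered pair (u, v) with u < v of a
# gold scoping DAG yields one ternary instance labelled +, - or eps (Eq. 3),
# computed on the transitive closure G+. Feature extraction is stubbed.
from itertools import combinations
import networkx as nx

def pairwise_instances(gold_dag, feature_fn):
    tc = nx.transitive_closure(gold_dag)      # work with the TDAG G+
    nodes = sorted(gold_dag.nodes())          # total order = e.g. left-to-right chunks
    instances = []
    for u, v in combinations(nodes, 2):       # C(n, 2) pairs
        if tc.has_edge(u, v):
            label = "+"                       # u outscopes v
        elif tc.has_edge(v, u):
            label = "-"                       # v outscopes u
        else:
            label = "eps"                     # incomparable / no scope interaction
        instances.append((feature_fn(u, v), label))
    return instances

# Example: the scoping (2 > 1 > 4; 1 > 3) of the sample sentence (G1 in Figure 1).
g = nx.DiGraph([("2", "1"), ("1", "4"), ("1", "3")])
for feats, label in pairwise_instances(g, lambda u, v: {"pair": (u, v)}):
    print(feats, label)
# pair labels: (1,2) '-', (1,3) '+', (1,4) '+', (2,3) '+', (2,4) '+', (3,4) 'eps'
```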
The difference though is that in RPC, the comparison is a (soft) binary classification, while for partial orders we have the case of incomparability (the label ϵ), hence a (soft) ternary classification. A soft ternary classifier generates three probabilities, pu,v(+), pu,v(−), and pu,v(ϵ) for each pair (u, v),8 corresponding to the three labels. Hence, equation Eq. (6) can be rearranged as follows: G∗= argmax G∈TDAG X u≺v∈V pu,v(δG(u, v)) (7) Let Γp be a graph like the one in Figure 2, containing exactly three edges between every two nodes, weighted by the probabilities from the n(n −1)/2 classifiers. We call Γp the preference graph. Intuitively speaking, the solution to Eq. (7) is the transitive directed acyclic subgraph of Γp that has the maximum sum of weights. Unfortunately finding this subgraph is an NP-hard problem.9 7Throughout this subsection, unless otherwise specified, by a pair of nodes we mean a pair (u, v) with u≺v. 8pv,u for u≺v is defined in the obvious way: pv,u(+) = pu,v(−), pv,u(−) = pu,v(+), and pv,u(ϵ) = pu,v(ϵ). 9 The proof is beyond the scope of this paper, but the idea is similar to that of Cohen et al. (1999), on finding total orders. Although they don’t use an RPC technique, Cohen et 3 2 0.5 1 0.1 0.8 0.2 0.3 0.1 0.3 0.1 0.6 Figure 2: A preference graph over three nodes. 1. Let Γp be the preference graph and set G to ∅. 2. ∀u ∈V , let π(u) = P v pu,v(+)−P v pu,v(−). 3. Let u∗= argmaxu π(u), S−= P v∈G pv,u∗(−) & Sϵ = P v∈G pv,u∗(ϵ). 4. Remove u∗and all its incident edges from Γp. 5. Add u∗to G; also if S− > Sϵ, for every v ∈G −u∗, add (v, u∗) to G. 6. If Γp is empty, output G, otherwise repeat steps 2-5. Figure 3: An approximation algorithm for Eq. (7) Since it is very unlikely to find an efficient algorithm to solve Eq. (7), instead, we propose the algorithm in Figure 3 which finds an approximate solution. The idea of the algorithm is simple. By finding u∗with the highest π(u) in step 3, we form a topological order for the nodes in G in a greedy way (see Footnote 9). We then add u∗to G. A directed edge is added either from every node in G−u∗to u∗or from no node, depending on which case makes the sum of the weights in G higher. Theorem 1 The algorithm in Figure 3 is a 1/3OPT approximation algorithm for Eq. (7). Proof idea. First of all, note that G is a TDAG, because edges are only added to the most recently created node in step 5. Let OPT be the optimum value of the right hand side of Eq. (7). The sum of all the weights in Γp is an upper bound for OPT: X u≺v∈V X λ∈{+,−,ϵ} pu,v(λ) ≥OPT Step 5 of the algorithm guarantees that the labels δG(u, v) satisfy: X u≺v∈V pu,v(δG(u, v)) ≥ X u≺v∈V pu,v(λ) (8) al. (1999) encounter a similar optimization problem. They propose an approximation algorithm which finds the solution (a total order) in a greedy way. Here we use the same greedy technique to find a total order, but take it only as the topological order of the solution (Figure 3). 67 for any λ ∈{+, −, ϵ}. Hence: X u≺v∈V pu,v(δG(u, v))= 1 3 3 X u≺v∈V pu,v(δG(u, v)) ! ≥1 3 X u≺v∈V X λ∈{+,−,ϵ} pu,v(λ) ≥1 3OPT In practice, we improve the algorithm in Figure 3, while maintaining the approximation guarantee, as follows. When adding a node u∗to graph G, we do not make a binary decision as to whether connect every node in G to u∗or none, but we use some heuristics to choose a subset of nodes (possibly empty) in G that if connected to u∗results in a TDAG whose sum of weights is at least as big as the binary none-vs-all case. As described in Sec. 
4, the algorithm works very well in our QSD system, finding the optimum solution in virtually all cases we examined. 3.2 Dealing with plurality and negation Consider the following sentence with the plural NP chunk the lines. 5. Merge [1p/ the lines], ending in [2/ a punctuation], with [3/ the next non-blank line]. 6. SI : (1c > 1d > 2; 1d > 3) 10 In QuanText, plural chunks are indexed with a number followed by the lowercase letter “p”. As seen in (6), the scoping looks different from before in that the terms 1d and 1c are not the label of any chunk. These two terms refer to the two quantified terms introduced by the plural chunk 1p: 1c (for collection) represents the set (or in better words collection) of entities, defined by the plural, and 1d (for distribution) refers to the implicit universal, introduced by the plural. In other words, for a plural chunk ip, id represents the universally quantified entity over the collection ic. The outscoping relation 1d > 2 in (6) states that every line in the collection, denoted by 1c, starts with its own punctuation character. Similarly, 1d > 3 indicates that every line has its own next non-blank line. Figure 4(a) shows a DAG for the scoping in (6). In (7) we have a sentence containing a negation. In QuanText, negation chunks are labeled with an uppercase “N” followed by a number. 10This scoping corresponds to the logical formula: Dx1c, Collection(x1c) ∧∀x1d, In(x1d, x1c) ⇒ (Line(x1d)∧(∃x2, Punctuation(x2)∧EndIn(x1d, x2))∧ (Dx3, ¬blank(x3) ∧next(x1d, x3) ∧merge(x1d, x3))) It is straightforward to write a formula for, say, 1c > 2 > 1d. (a) 1c 1d 2 3 (b) 2 1 3 N1 4 Figure 4: DAGs for scopings in (6) and (8) 7. Extract [1/ every word] in [2/ file “1.txt”], which starts with [3/ a capital letter], but does [N1/ not] end with [4/ a capital letter]. 8. SI : (2 > 1 > 3; 1 > N1 > 4) As seen here, a negation simply introduces a chunk, which participates in outscoping relations like an NP chunk. Figure 4(b) represents the scoping in (8) as a DAG. From these examples, as long as we create two nodes in the DAG corresponding to each plural chunk, and one node corresponding to each negation, there is no need to modify the underlying model (defined in the previous section). However, when u (or v) is a negation (Ni) or an implicit universal (id) node, the probabilities pλ u,v (λ ∈{+, −, ϵ}) may come from a different source, e.g. a different classification model or the same model with a different set of features, as described in the following section. 3.3 Feature selection Previous work has shown that the lexical item of quantifiers and syntactic clues (often extracted from phrase structure trees) are good at predicting quantifier scoping. Srinivasan and Yates (2009) use the semantics of the head noun in a quantified NP to predict the scoping. MA11 also find the lexical item of the head noun to be a good predictor. In this paper, we introduce a new set of syntactic features which we found very informative: the “type” dependency features of de Marneffe et al. (2006). Adopting this new set of features, we outperform MA11’s system by a large margin. Another point to mention here is that the features that are predictive of the relative scope of quantifiers are not necessarily as helpful when determining the scope of negation and vice versa. Therefore we do not use exactly the same set of features when 68 one of the scopal terms in the pair11 is a negation, although most of the features are quite similar. 
3.3.1 NP chunks We first describe the set of features we have adopted when both scopal terms in a pair are NPchunks. We have organized the features into different categories listed below. Individual NP-chunk features Following features are extracted for both NP chunks in a pair. • The part-of-speech (POS) tag of the head of chunk • The lexical item of the head noun • The lexical item of the determiner/quantifier • The lexical item of the pre-determiner • Does the chunk contain a constant (e.g. “do”, ’x’)? • Is the NP-chunk a plural? Implicit universal of a plural Remember that every plural chunk i introduces two nodes in the DAG, ic and id. Both nodes are introduced by the same chunk i, therefore they use the same set of features. The only exception is a single additional binary feature for plural NP chunks, which determines whether the given node refers to the implicit universal of the plural (i.e. id) or to the collection itself (i.e. ic). • Does this node refer to an implicit universal? Syntactic features – phrase structure tree As mentioned above, we have used two sets of syntactic features. The first is motivated by HS03’s work and is based on the constituency (i.e. phrase structure) tree T of the sentence. Since our model is based on pairwise comparison, the following features are defined for each pair of chunks. In the following, by chunk we mean the deepest phrase-level node in T dominating all the words in the chunk. If the constituency tree is correct, this node is usually an NP node. Also, P refers to the undirected path in T connecting the two chunks. • Syntactic category of the deepest common ancestor • Does 1st/2nd chunk C-command 2nd/1st one? • Length of the path P • Syntactic categories of nodes on P • Is there a conjoined node on P? • List of punctuation marks dominated by nodes on P Syntactic features – dependency tree Although regular “untyped” dependency relations do not seem to help our QSD system in the presence of phrase-structure trees, we found the col11Since our model is based on pairwise comparison, every sample is in fact a pair of nodes (u, v) of the DAG. lapsed typed dependencies (de Marneffe and Manning, 2008) very helpful, even when used on top of the phrase-structure features. Below is the list of features we extract from the collapsed typed dependency tree Td of each sentence. In the following, by noun we mean the node in Td which corresponds to the head of the chunk. The choice of the word noun, however, may be sloppy, as the head of an NP chunk may not be a noun. • Does 1st/2nd noun dominate 2nd/1st noun? • Does 1st/2nd noun immediately dominate 2nd/1st? • Type of incoming dependency relation of each noun • Syntactic category of the deepest common ancestor • Lexical item of the deepest common ancestor • Length of the undirected path between the two 3.3.2 Negations There are no sentences in our corpus with more than one negation. Therefore, for every pair of nodes with one negation, the other node must refer to an NP chunk. We use the following wordlevel, phrase-structure, and dependency features for these pairs. • Lexical item of the determiner for the NP chunk • Does the NP chunk contain a constant? • Is the NP chunk a plural? • If so, does this node refer to its implicit universal? • Does the negation C-command the NP chunk in T? • Does the NP chunk C-command the negation in T? • What is the POS of the parent p of negation in Td? • Does p dominate the noun in Td? • Does the noun dominate p in Td? • Does p immediately dominate the noun in Td? 
• If so, what is the type of the dependency? • Does the noun immediately dominate p in Td? • If so, what is the type of the dependency? • Length of the undirected path between the two in Td 4 Experiments QuanText contains 500 sentences with a total of 1750 chunks, that is 3.5 chunks/sentence on average. Of those, 1700 chunks are NP chunks. The rest are scopal operators, mainly negation. Of all the NP chunks, 320 (more than 18%) are plural, each introducing an implicit universal, that is, an additional node in the DAG. Since we feed each pair of elements to the classifiers independently, each (unordered) pair introduces one sample. Therefore, a sentence with n scopal elements creates C(n, 2) = n(n −1)/2 samples for classification. When all the elements are taken into account,12 the total number of samples in the corpus will be: 12Here by all elements we mean explicit chunks and the implicit universals. QuanText labels some other (implicit) elements, which we have not been handled in this work. In particular, some nouns introduce two entities: a type and a 69 X i C(ni, 2) ≈4500 (9) Where ni is the number of scopal terms introduced by sentence i. Out of the 4500 samples, around 1800 involve at least one implicit universal (i.e., id), but only 120 samples contain a negation. We evaluate the performance of the system for implicit universals and negation both separately and in the context of full scope disambiguation. We split the corpus at random into three sets of 50, 100, and 350 sentences, as development, test, and train sets respectively.13 To extract part-of-speech tags, phrase structure trees, and typed dependencies, we use the Stanford parser (Klein and Manning, 2003; de Marneffe et al., 2006) on both train and test sets. Since we are using SVM, we have passed the confidence levels through a softmax function to convert them into probabilities P λ u,v before applying the algorithm of Section 3. We take MA11’s system as the baseline. However, in order to have a fair comparison, we have used the output of the Stanford parser to automatically generate the same features that MA11 have hand-annotated.14 In order to run the baseline system on implicit universals, we take the feature vector of a plural NP and add a feature to indicate that this feature vector represents the implicit universal of the corresponding chunk. Similarly, for negation we add a feature to show that the chunk represents a negation. As shown in Section 3.3.2, we have used a more compact set of features for negations. Once again, in order to have a fair comparison, we apply a similar modification to the baseline system. We also use the exact same classifier as used in MA11.15 Figure 5(a) compares the performance of our model, which we refer to as RPC-SVM-13, with the baseline system, but only on explicit NP chunks.16 The goal for running this experiment has been to compare the performance of our model to the baseline systoken, as described by Manshadi et al. (2012). In this work, we have only considered the token entity introduced by those nouns and have ignored the type entity. 13Since the percentage of sentences with negation is small, we made sure that those sentences are distributed uniformly between three sets. 14MA11’s features are similar to part-of-speech tags and untyped dependency relations. 15SV M Multiclass from SVM-light (Joachims, 1999). 16In all experiments, we ignore NP conjunctions. Previous work treats a conjunction of NPs as separate NPs. 
However, similar to plurals, NP conjunctions (disjunctions) introduce an extra scopal element: a universal (existential). We are working on an annotation scheme for NP conjunctions, so we have left this for after the annotations become available. NP-Chunks only (no id or negation) σ+ P+ R+ F+ AR A Baseline (MA11) 0.762 0.638 0.484 0.550 0.59 0.47 Our model (RPC-SVM-13) 0.827 0.743 0.677 0.709 0.68 0.55 (a) Scoping explicit NP chunks Overall system (including negation and implicit universals) σ+ P+ R+ F+ AR A Baseline (MA11) 0.787 0.688 0.469 0.557 0.59 0.47 Our model (RPC-SVM-13) 0.863 0.784 0.720 0.751 0.69 0.55 (b) Scoping all elements (including id and Ni) Figure 5: Performance on QuanText data tem on the task that it was actually defined to perform (that is scoping only explicit NP chunks). As seen in this table, by incorporating a richer set of features and a better learning algorithm, our model outperforms the baseline by almost 15%. The measure A in these figures shows sentencebased accuracy. A sentence counts as correct iff every pair of scopal elements has been labeled correctly. Therefore A is a tough measure. Furthermore, it is sensitive to the length of the sentence. Following MA11, we have computed another sentence-based accuracy measure, AR. In computing AR, a sentence counts as correct iff all the outscoping relations have been recovered correctly – in other words, iff R = 100%, regardless of the value of P. AR may be more practically meaningful, because if in the correct scoping of the sentence there is no outscoping between two elements, inserting one does not affect the interpretation of the sentence. In other words, precision is less important for QSD in practice. Figure 5(b) gives the performance of the overall model when all the elements including the implicit universals and the negations are taken into account. That the F-score of our model for the second experiment is 0.042 higher than F-score for the first indicates that scoping implicit universals and/or negations must be easier than scoping explicit NP chunks. In order to find how much one or both of the two elements contribute to this gain, we have run two more experiments, scoping only the pairs with at least one implicit universal and pairs with one negation, respectively. Figure 6 reports the results. As seen, the contribution in boosting the overall performance comes from the implicit universals while negations, in fact, lower the performance. The performance for pairs with implicit universal is higher because universals, in general, 70 Implicit universals only (pairs with at least one id) P+ R+ F+ Baseline (MA11) 0.776 0.458 0.576 Our model (RPC-SVM-13) 0.836 0.734 0.782 (a) Pairs with at least one implicit universal Negation only (pairs with one negation) P+ R+ F+ Baseline (MA11) 0.502 0.571 0.534 Our model (RPC-SVM-13) 0.733 0.55 0.629 (b) Pairs with at least one negation Figure 6: Implicit universals and negations are easier to scope, even for the human annotators.17 There are several reasons for poor performance with negations as well. First, the number of negations in the corpus is small, therefore the data is very sparse. Second, the RPC model does not work well for negations. Scoping a negation relative to an NP chunk, with which it has a long distance dependency, often depends on the scope of the elements in between. Third, scoping negation usually requires a deep semantic analysis. 
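The two sentence-level measures can be computed directly from per-pair decisions; the sketch below assumes gold and predicted scopings are given as sets of outscoping pairs per sentence (the data layout is an assumption, the definitions follow the text, and the pair labels mirror λ ∈ {+, −, ϵ}).

```python
# Sketch of the sentence-level measures A and AR used in Figure 5.
# gold[s] / pred[s]: sets of (u, v) pairs meaning "u outscopes v" in sentence s;
# pairs[s]: the unordered pairs of scopal elements in s.
# AR counts a sentence iff every gold outscoping relation is recovered
# (recall = 100%); A iff every pair is labeled exactly right.

def pair_label(rels, u, v):
    if (u, v) in rels:
        return "+"          # u outscopes v
    if (v, u) in rels:
        return "-"          # v outscopes u
    return "e"              # no outscoping in either direction

def sentence_accuracies(gold, pred, pairs):
    a = ar = 0
    for s in gold:
        if gold[s] <= pred[s]:
            ar += 1
        if all(pair_label(gold[s], u, v) == pair_label(pred[s], u, v)
               for (u, v) in pairs[s]):
            a += 1
    n = len(gold)
    return a / n, ar / n
```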
In order to see how well our approximation algorithm is working, similar to the approach of Chambers and Jurafsky (2008), we tried an ILP solver18 for DAGs with at most 8 nodes to find the optimum solution, but we found the difference insignificant. In fact, the approximation algorithm finds the optimum solution in all but one case.19 5 Related work Since automatic QSD is in general challenging, traditionally quantifier scoping is left underspecified in deep linguistic processing systems (Alshawi and Crouch, 1992; Bos, 1996; Copestake et al., 2001). Some efforts have been made to move underspecification frameworks towards weighted constraint-based graphs in order to produce the most preferred reading (Koller et al., 2008), but the source of these types of constraint are often discourse, pragmatics, world knowledge, etc., and hence, they are hard to obtain automatically. In or17Trivially, we have taken the relation outscoping ic > id for granted and not counted it towards higher performance. 18lpsolve: http://sourceforge.net/projects/lpsolve 19To find the gain that can be obtained with gold-standard parses, we used MA11’s system with their hand-annotated and the equivalent automatically generated features. The former boost the performance by 0.04. Incidentally, HS03 lose almost 0.04 when switching to automatically generated parses. der to evade scope disambiguation, yet be able to perform entailment, Koller and Thater (2010) propose an algorithm to calculate the weakest readings20 from a scope-underspecified representation. Early efforts on automatic QSD (Moran, 1988; Hurum, 1988) were based on heuristics, manually formed into rules with manually assigned weights for resolving conflicts. To the best of our knowledge, there have been four major efforts on statistical QSD for English: Higgins and Sadock (2003), Galen and MacCartney (2004), Srinivasan and Yates (2009), and Manshadi and Allen (2011a). The first three only scope two scopal terms in a sentence, where the scopal term is an NP with an explicit quantification. MA11 is the first to scope any number of NPs in a sentence with no restriction on the type of quantification. Besides ignoring negation and implicit universals, their system has some other limitations too. First, the learning model is not theoretically justified. Second, handannotated features (e.g. dependency relations) are used on both the train and the test data. 6 Summary and future work We develop the first statistical QSD model addressing the interaction of quantifiers with negation and the implicit universal of plurals, defining a baseline for this task on QuanText data (Manshadi et al., 2012). In addition, our work improves upon Manshadi and Allen (2011a)’s work by (approximately) optimizing a well justified criterion, by using automatically generated features instead of hand-annotated dependencies, and by boosting the performance by a large margin with the help of a rich feature vector. This work can be improved in many directions, among which are scoping more elements such as other scopal operators and implicit entities, deploying more complex learning models, and developing models which require less supervision. Acknowledgement We need to thank William de Beaumont and Jonathan Gordon for their comments on the paper and Omid Bakhshandeh for his assistance. This work was supported in part by NSF grant 1012205, and ONR grant N000141110417. 
20Those which can be entailed from other readings but do not entail any other reading 71 References Hiyan Alshawi and Richard Crouch. 1992. Monotonic semantic interpretation. In Proceedings of Association for Computational Linguistics, pages 32–39. Johan Bos. 1996. Predicate logic unplugged. In Proceedings of the 10th Amsterdam Colloquium, pages 133–143. Nathanael Chambers and Dan Jurafsky. 2008. Jointly combining implicit constraints improves temporal ordering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 698–706, Stroudsburg, PA. William W. Cohen, Robert E. Schapire, and Yoram Singer. 1999. Learning to order things. Journal of Artificial Intelligence Research, 10:243–270. Ann Copestake, Alex Lascarides, and Dan Flickinger. 2001. An algebra for semantic construction in constraint-based grammars. In Proceedings of Association for Computational Linguistics ’01, pages 140–147. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The Stanford typed dependencies representation. In Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation, CrossParser ’08, pages 1–8. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure trees. In Proceedings of International Conference on Language Resources and Evaluation ’12. Johannes Furnkranz and Eyke Hullermeier. 2003. Pairwise preference learning and ranking. In Proceedings of the 14th European Conference on Machine Learning, volume 2837, pages 145–156. Andrew Galen and Bill MacCartney. 2004. Statistical resolution of scope ambiguity in natural language. http://nlp.stanford.edu/nlkr/scoper.pdf. Fritz Hamm and Edward W. Hinrichs. 2010. Plurality and Quantification. Studies in Linguistics and Philosophy. Springer. Aurelie Herbelot and Ann Copestake. 2010. Annotating underquantification. In Proceedings of the Fourth Linguistic Annotation Workshop, LAW IV ’10, pages 73–81. Derrick Higgins and Jerrold M. Sadock. 2003. A machine learning approach to modeling scope preferences. Computational Linguistics, 29(1):73–96. Eyke Hullermeier, Johannes Furnkranz, Weiwei Cheng, and Klaus Brinker. 2008. Label ranking by learning pairwise preferences. Artificial Intelligence, 172(1617):1897 – 1916. Sven Hurum. 1988. Handling scope ambiguities in English. In Proceedings of the second conference on Applied natural language processing, ANLC ’88, pages 58–65. Thorsten Joachims. 1999. Making large-scale support vector machine learning practical. In Bernhard Sch¨olkopf, Christopher J. C. Burges, and Alexander J. Smola, editors, Advances in kernel methods, pages 169–184. MIT Press, Cambridge, MA, USA. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 423– 430. Alexander Koller and Stefan Thater. 2010. Computing weakest readings. In Proceedings of the 48th Annual Meeting on Association for Computational Linguistics, Uppsala, Sweden. Alexander Koller, Michaela Regneri, and Stefan Thater. 2008. Regular tree grammars as a formalism for scope underspecification. In Proceedings of Annual Meeting on Association for Computational Linguistics and Human Language Technologies ’08. Fred Landmann. 2000. Events and plurality. Kluwer Academic Publishers, Dordrecht. Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. 
In Proceedings of Association for Computational Linguistics (ACL). Mehdi Manshadi and James Allen. 2011a. Unrestricted quantifier scope disambiguation. In Proceedings of Association for Computational Linguistics ’11, Workshop on Graph-based Methods for NLP (TextGraph-6). Mehdi Manshadi, James Allen, and Mary Swift. 2011b. A corpus of scope-disambiguated English text. In Proceedings of Association for Computational Linguistics and Human Language Technologies ’11: short papers, pages 141–146. Mehdi Manshadi, James Allen, and Mary Swift. 2012. An annotation scheme for quantifier scope disambiguation. In Proceedings of International Conference on Language Resources and Evaluation ’12. Douglas Moran. 1988. Quantifier scoping in the SRI core language engine. In Proceedings of the 26th Annual Meeting on Association for Computational Linguistics. Lance Ramshaw and Mitch Marcus. 1995. Text Chunking Using Transformation-Based Learning. In Proceedings of the Third Workshop on Very Large Corpora, pages 82–94, Somerset, New Jersey. Prakash Srinivasan and Alexander Yates. 2009. Quantifier scope disambiguation using extracted pragmatic knowledge: preliminary results. In Proceedings of EMNLP ’09. 72
2013
7
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 710–720, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Using Conceptual Class Attributes to Characterize Social Media Users Shane Bergsma and Benjamin Van Durme Department of Computer Science and Human Language Technology Center of Excellence Johns Hopkins University Baltimore, MD 21218, USA Abstract We describe a novel approach for automatically predicting the hidden demographic properties of social media users. Building on prior work in common-sense knowledge acquisition from third-person text, we first learn the distinguishing attributes of certain classes of people. For example, we learn that people in the Female class tend to have maiden names and engagement rings. We then show that this knowledge can be used in the analysis of first-person communication; knowledge of distinguishing attributes allows us to both classify users and to bootstrap new training examples. Our novel approach enables substantial improvements on the widelystudied task of user gender prediction, obtaining a 20% relative error reduction over the current state-of-the-art. 1 Introduction There has been growing interest in characterizing social media users based on the content they generate; that is, automatically labeling users with demographic categories such as age and gender (Burger and Henderson, 2006; Schler et al., 2006; Rao et al., 2010; Mukherjee and Liu, 2010; Pennacchiotti and Popescu, 2011; Burger et al., 2011; Van Durme, 2012). Automatic user characterization has applications in targeted advertising and personalization, and could also lead to finergrained assessment of public opinion (O’Connor et al., 2010) and health (Paul and Dredze, 2011). Consider the following tweet and suppose we wish to predict the user’s gender: Dirac was one of my boyhood heroes. I’m glad I met him once. RT Paul Dirac image by artist Eric Handy: http:... State-of-the-art approaches cast this problem as a classification task and train classifiers using supervised learning (Section 2). The features of the classifier are indicators of specific words in the user-generated text. While a human would assume that someone with boyhood heroes is male, a standard classifier has no way of exploiting such knowledge unless the phrase occurs in training data. We present an algorithm that improves user characterization by collecting and exploiting such common-sense knowledge. Our work is inspired by algorithms that processes large text corpora in order to discover the attributes of semantic classes, e.g. (Berland and Charniak, 1999; Schubert, 2002; Almuhareb and Poesio, 2004; Tokunaga et al., 2005; Girju et al., 2006; Pas¸ca and Van Durme, 2008; Alfonseca et al., 2010). We learn the distinguishing attributes of different demographic groups (Section 3), and then automatically assign users to these groups whenever they refer to a distinguishing attribute in their writings (Section 4). Our approach obviates the need for expensive annotation efforts, and allows us to rapidly bootstrap training data for new classification tasks. We validate our approach by advancing the state-of-the-art on the most well-studied user classification task: predicting user gender (Section 5). Our bootstrapped system, trained purely from automatically-annotated Twitter data, significantly reduces error over a state-of-the-art system trained on thousands of gold-standard training examples. 
2 Supervised User Characterization The current state-of-the-art in user characterization is to use supervised classifiers trained on annotated data. For each instance to be classified, the output is a decision about a distinct demographic property, such as Male/Female or Over/Under-18. A variety of classification algorithms have been employed, including SVMs (Rao et al., 2010), de710 cision trees (Pennacchiotti and Popescu, 2011), logistic regression (Van Durme, 2012), and the Winnow algorithm (Burger et al., 2011). Content Features: BoW Prior classifiers use a set of features encoding the presence of specific words in the user-generated text. We call these features BoW features as they encode the standard Bag-of-Words representation which has been highly effective in text categorization and information retrieval (Sebastiani, 2002). User-Profile Features: Usr Some researchers have explored features for user-profile metainformation in addition to user content. This may include the user’s communication behavior and network of contacts (Rao et al., 2010), their full name (Burger et al., 2011) and whether they provide a profile picture (Pennacchiotti and Popescu, 2011). We focus on the case where we only have access to the user’s screen-name (a.k.a. username). Using a combination of content and username features “represents a use case common to many different social media sites, such as chat rooms and news article comment streams” (Burger et al., 2011). We refer to features derived from a username as Usr features in our experiments. 3 Learning Class Attributes We aim to improve the automated classification of users into various demographic categories by learning and applying the distinguishing attributes of those categories, e.g. that males have boyhood heroes. Our approach builds on lexical-semantic research on the topic of class-attribute extraction. In this research, the objective is to discover various attributes or parts of classes of entities. For example, Berland and Charniak (1999) learn that the class car has parts such as headlight, windshield, dashboard, etc. Berland and Charniak extract these attributes by mining a corpus for fillers of patterns such as ‘car’s X’ or ‘X of a car’. Note their patterns explicitly include the class itself (car). Another approach is to use patterns that are based on instances (i.e. hyponyms or sub-classes) of the class. For example, Pas¸ca and Van Durme (2007) learn the attributes of the class car via patterns involving instances of cars, e.g. Chevrolet Corvette’s X and X of a Honda Civic. For these approaches, lists of instances are typically collected from publicly-available resources such as WordNet or Wikipedia (Pas¸ca and Van Durme, 2007; Van Durme et al., 2008), acquired automatically from corpora (Pas¸ca and Van Durme, 2008; Alfonseca et al., 2010), or simply specified by hand (Schubert, 2002). Creation of Instance Lists We use an instancebased approach; our instances are derived from collections of common nouns that are associated with roles and occupations of people. For the gender task that we study in our experiments, we acquire class instances by filtering the dataset of nouns and their genders created by Bergsma and Lin (2006). This dataset indicates how often a noun is referenced by a male, female, neutral or plural pronoun. 
We extract prevalent common nouns for males and females by selecting only those nouns that (a) occur more than 200 times in the dataset, (b) mostly occur with male or female pronouns, and (c) occur as lower-case more often than upper-case in a web-scale N-gram corpus (Lin et al., 2010). We then classify a noun as Male (resp. Female) if the noun is indicated to occur with male (resp. female) pronouns at least 85% of the time. Since the gender data is noisy, we also quickly pruned by hand any instances that were malformed or obviously incorrectly assigned by our automatic process. This results in 652 instances in total. Table 1 provides some examples. Male: bouncer, altar boy, army officer, dictator, assailant, cameraman, drifter, chauffeur, bad guy Female: young lady, lesbian, ballerina, waitress, granny, chairwoman, heiress, soprano, socialite Table 1: Example instances used for extraction of class attributes for the gender classification task Attribute Extraction We next collect and rank attributes for each class. We first look for fillers of attribute-patterns involving each of the instances. Let I represent an instance of one of our classes. We find fillers of the single high-precision pattern: {word=I,tag=NN} | {z } instance {word=’s} | {z } ’s [{word=.*}* {tag=N.*}] | {z } attribute (E.g. dictator ’s [former mistress]). The expression “tag=NN” means that I must be tagged as a noun. The expression in square brackets is the filler, i.e. the extracted attribute, A. The notation “{word=.*}* tag=N.*” means that A can be any sequence of tokens ending in a noun. We use an 711 equivalent pattern when I is multi-token. The output of this process is a set of (I,A) pairs. In attribute extraction, typically one must choose between the precise results of rich patterns (involving punctuation and parts-of-speech) applied to small corpora (Berland and Charniak, 1999) and the high-coverage results of superficial patterns applied to web-scale data, e.g. via the Google API (Almuhareb and Poesio, 2004). We obtain the best of both worlds by matching our precise pattern against a version of the Google Ngram Corpus that includes the part-of-speech tag distributions for every N-gram (Lin et al., 2010). We found that applying this pattern to web-scale data is effective in extracting useful attributes. We acquired around 20,000 attributes in total. Finding Distinguishing Attributes Unlike prior work, we aim to find distinguishing properties of each class; that is, the kinds of properties that uniquely distinguish a particular category. Prior work has mostly focused on finding “relevant” attributes (Alfonseca et al., 2010) or “correct” parts (Berland and Charniak, 1999). A leg is a relevant and correct part of both a male and a female (and many other living and inanimate objects), but it does not help us distinguish males from females in social media. We therefore rank our attributes for each class by their strength of association with instances of that specific class.1 To calculate the association, we first disregard the count of each (I,A) pair and consider each unique pair to be a single probabilistic event. We then convert the (I,A) pairs to corresponding (C,A) pairs by replacing I with the corresponding class, C. 
We then calculate the pointwise mutual information (Church and Hanks, 1990) between each C and A over the set of events: PMI(C, A) = log p(C, A) p(C)p(A) (1) If the PMI>0, the observed probability of a class and attribute co-occurring is greater than the probability of co-occurrence that we would expect if C and A were independently distributed. For each class, we rank the attributes by their PMI scores. 1Reisinger and Pas¸ca (2009) considered the related problem of finding the most appropriate class for each attribute; they take an existing ontology of concepts (WordNet) as a class hierarchy and use a Bayesian approach to decide “the correct level of abstraction for each attribute.” Filtering Attributes We experimented with two different methods to select a final set of distinguishing attributes for each class: (1) we used a threshold to select the top-ranked attributes for each class, and (2) we manually filtered the attributes. For the gender classification task, we manually filtered the entire set of attributes to select around 1000 attributes that were judged to be discriminative (two thirds of which are female). This filtering took one annotator only a few hours to complete. Because this process was so trivial, we did not invest in developing annotation guidelines or measuring inter-annotator agreement. We make these filter attributes available online as an attachment to this article, available through the ACL Anthology. Ultimately, we discovered that manual filtering was necessary to avoid certain pathological cases in our Twitter data. For example, our PMI scoring finds homepage to be strongly associated with males. In our gold-standard gender data (Section 5), however, every user has a homepage [by dataset construction]; we might therefore incorrectly classify every user as Male. We agree with Richardson et al. (1998) that “automatic procedures ... provide the only credible prospect for acquiring world knowledge on the scale needed to support common-sense reasoning” but “hand vetting” might be needed to ensure “accuracy and consistency in production level systems.” Since our approach requires manual involvement in the filtering of the attribute list, one might argue that one should simply manually enumerate the most relevant attributes directly. However, the manual generation of conceptual features by a single researcher results in substantial variability both across and within participants (McRae et al., 2005). Psychologists therefore generate such lists by pooling the responses across many participants: future work may compare our “automatically generate, manually prune” approach to soliciting attributes via crowdsourcing.2 Table 2 gives examples of our extracted at2One can also view the work of manually filtering attributes as a kind of “feature labeling.” There is evidence from Zaidan et al. (2007) that a few hours of feature labeling can be more productive than annotating new training examples. In fact, since Zaidan et al. (2007) label features at the token level (e.g., in our case one would highlight “handbag” in a given tweet), while we label features at the type level (e.g., deciding whether to mark the word “handbag” as feminine in general), our process is likely even more efficient. Future work may also wish to consider this connection to socalled ”annotator rationales” more deeply. 
712 Male: wife, widow, wives, ex-girlfriend, erection, testicles, wet dream, bride, buddies, exwife, first-wife, penis, death sentence, manhood Female: vagina, womb, maiden name, dresses, clitoris, wedding dress, uterus, shawl, necklace, ex-husband, ex-boyfriend, dowry, nightgown Table 2: Example attributes for gender classes, in descending order of class-association score tributes. Our approach captures many multi-token attributes; these are often distinguishing even though the head noun is ambiguous (e.g. name is ambiguous, maiden name is not). Our attributes also go beyond the traditional meronyms that were the target of earlier work. As we discuss further in Related Work (Section 7), previous researchers have worried about a proper definition of parts or attributes and relied on human judgments for evaluation (Berland and Charniak, 1999; Girju et al., 2006; Van Durme et al., 2008). For us, whether a property such as dowry should be considered an “attribute” of the class Female is immaterial; we echo Almuhareb and Poesio (2004) who (on a different task) noted that “while the notion of ‘attribute’ is not completely clear... our results suggest that trying to identify attributes is beneficial.” 4 Applying Class Attributes To classify users using the extracted attributes, we look for cases where users refer to such attributes in their first-person writings. We performed a preliminary analysis of a two-week sample of tweets from the TREC Tweets2011 Corpus.3 We found that users most often reveal their attributes in the possessive construction, “my X” where X is an attribute, quality or event that they possess (in a linguistic sense). For example, we found over 1000 tweets with the phrase “my wife.” In contrast, “I have a wife” occurs only 5 times.4 We therefore assign a user to a demographic category as follows: We first part-of-speech tag our data using CRFTagger (Phan, 2006) and then look for “my X” patterns where X is a sequence of tokens terminating in a noun, analogous to our 3http://trec.nist.gov/data/tweets/ This corpus was developed for the TREC Microblog track (Soboroff et al., 2012). 4Note that “I am a man” occurs only 20 times. Users also reveal their names in “my name is X” patterns in several hundred tweets, but this is small compared to cases of selfdistinguishing attributes. Exploiting these alternative patterns could nevertheless be a possible future direction. attribute-extraction pattern (Section 3).5 When a user uses such a “my X” construction, we match the filler X against our attribute lists for each class. If the filler is on a list, we call it a selfdistinguishing attribute of a user. We then apply our knowledge of the self-distinguishing attribute and its corresponding class in one of the following three ways: (1) ARules: Using Attribute-Based Rules to Override a Classifier When human-annotated data is available for training and testing a supervised classifier, we refer to it as gold standard data. Our first technique provides a simple way to use our identified self-distinguishing attributes in conjunction with a classifier trained on gold-standard data. If the user has any selfdistinguishing attributes, we assign the user to the corresponding class; otherwise, we trust the output of the classifier. (2) Bootstrapped: Automatic Labeling of Training Examples Even without gold standard training data, we can use our self-distinguishing attributes to automatically bootstrap annotations. 
We collect a large pool of unlabeled users and their tweets, and we apply the ARules described above to label those users that have self-distinguishing attributes. Once an example is auto-annotated, we delete the self-distinguishing attributes from the user’s content. This prevents the subsequent learning algorithm from trivially learning the rules with which we auto-annotated the data. Next, the auto-annotated examples are used as training data for a supervised system.6 Finally, when applying the Bootstrapped classifiers, we can still apply the ARules as a post-process (although in practice this made little difference in our final results). (3) BootStacked: Gold Standard and Bootstrapped Combination Although we show that an accurate classifier can be trained using autoannotated Bootstrapped data alone, we also test whether we can combine this data with any goldstandard training examples to achieve even better performance. We use the following simple but 5While we used an “off the shelf” POS tagger in this work, we note that taggers optimized specifically for social media are now available and would likely have resulted in higher tagging accuracy (e.g. Owoputi et al. (2013)). 6Note that while our target gender task presents mutuallyexclusive output classes, we can still train classifiers for other categories without clear opposites (e.g. for labeling users as Parents or Doctors) by using the 1-class classification paradigm (Koppel and Schler, 2004). 713 effective method for combining data from these two sources, inspired by prior techniques used in the domain adaptation literature (Daum´e III and Marcu, 2006). We first use the trained Bootstrapped system to make predictions on the entire set of gold standard data (gold train, development, and test sets). We then use these predictions as features in a classifier trained on the gold standard data. We refer to this system as the BootStacked system in our evaluation. 5 Twitter Gender Prediction To test the use of self-distinguishing attributes in user classification, we apply our methods to the task of gender classification on Twitter. This is an important and intensely-studied task within academia and industry. Furthermore, for this task it is possible to semi-automatically acquire large amounts of ground truth (Burger et al., 2011). We can therefore benchmark our approach against state-of-the-art supervised systems trained with plentiful gold-standard data, giving us an idea of how well our Bootstrapped system might compare to theoretically top-performing systems on other tasks, domains, and social media platforms where such gold-standard training data is not available. Gold Data Our data is derived from the corpus created by Burger et al. (2011). Burger et al. observed that many Twitter users link their Twitter profile to homepages on popular blogging websites. Since “many of these [sites] have wellstructured profile pages [where users] must select gender and other attributes from dropdown menus,” they were able to link these attributes to the Twitter users. Using this process, they created a large multi-lingual corpus of Twitter users and genders. We filter non-English tweets from this corpus using the LID system of Bergsma et al. (2012) and also tweets containing URLs (since many of these are spam) and re-tweets. We then filter users with <40 tweets and randomly divide the remaining users into 2282 training, 1140 development, and 1141 test examples. 
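The auto-annotation step of the Bootstrapped strategy can be pictured as follows: tag each tweet, collect fillers of "my X" possessives that end in a noun, and label the user as soon as a filler appears on one class's attribute list. The tagger interface, the tiny attribute lists, and the handling of conflicting evidence in this sketch are assumptions; the paper itself uses CRFTagger and the full filtered attribute lists.

```python
# Sketch of the auto-annotation step of Section 4: find "my X" possessives
# whose filler X ends in a noun, match X against the per-class attribute
# lists, and label the user accordingly.  Tweets are assumed to arrive as
# lists of (token, POS) pairs; the attribute sets below are small excerpts.

MALE_ATTRS = {"wife", "ex-wife", "wet dream"}
FEMALE_ATTRS = {"maiden name", "ex-husband", "wedding dress"}
CLASS_ATTRS = {"Male": MALE_ATTRS, "Female": FEMALE_ATTRS}

def my_x_fillers(tagged_tweet, max_len=3):
    """Yield every filler X of 'my X' that ends in a noun (Penn tag N*)."""
    toks = [t.lower() for t, _ in tagged_tweet]
    tags = [p for _, p in tagged_tweet]
    for i, tok in enumerate(toks):
        if tok != "my":
            continue
        for j in range(i + 1, min(i + 1 + max_len, len(toks))):
            if tags[j].startswith("N"):
                yield " ".join(toks[i + 1:j + 1])

def auto_annotate(tagged_tweets):
    """Return (label, matched_attributes); label is None if no unambiguous match."""
    found = {c: set() for c in CLASS_ATTRS}
    for tweet in tagged_tweets:
        for filler in my_x_fillers(tweet):
            for cls, attrs in CLASS_ATTRS.items():
                if filler in attrs:
                    found[cls].add(filler)
    labeled = [c for c in CLASS_ATTRS if found[c]]
    if len(labeled) == 1:       # conflicting evidence is skipped (an assumption)
        return labeled[0], found[labeled[0]]
    return None, set()

# In the full pipeline the matched attributes are then deleted from the
# user's text before the example joins the bootstrapped training pool
# (not shown here).
```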
Classifier Set-up We train logistic-regression classifiers on this gold standard data via the LIBLINEAR package (Fan et al., 2008). We optimize the classifier’s regularization parameter on development data and report final results on the heldout test examples. We also report the results of our new attribute-based strategies (Section 4) on the test data. We report accuracy: the percentage of examples labeled correctly. Our classifiers use both BoW and Usr features (Section 2). To increase the generality of our BoW features, we preprocess the text by lowercasing and converting all digits to special ‘#’ symbols. We then create real-valued features that encode the log-count of each word in the input. While Burger et al. (2011) found “no appreciable difference in performance” when using either binary presence/absence features or encoding the frequency of the word, we found real-valued features worked better in development experiments. For the Usr features, we add special beginning and ending characters to the username, and then create features for all character n-grams of length twoto-four in the modified username string. We include n-gram features with the original capitalization pattern and separate features with the n-grams lower-cased. Unlabeled Data For Bootstrapped training, we also use a pool of unlabeled Twitter data. This pool comprises the union of 2.2 billion tweets from 05/2009 to 10/2010 (O’Connor et al., 2010), 1.9 billion tweets collected from 07/2011 to 11/2012, and 80 million tweets collected from the followers of 10-thousand location and languagespecific Twitter feeds. We filter this corpus as above, except we do not put any restrictions on the number of tweets needed per user. We also filter any users that overlap with our gold standard data. Bootstrapping Analysis We apply our Bootstrapped auto-annotation strategy to this unlabeled data, yielding 789,285 auto-annotated examples of users and their tweets. The decisions of our bootstrapping process reflect the true gender distribution; the auto-annotated data is 60.5% Female, remarkably close to the 60.9% proportion in our gold standard test set. Figure 1 shows that a wide range of self-distinguishing attributes are used in the auto-annotation process. This is important because if only a few attributes are used (e.g. wife/husband or penis/vagina), we might systematically miss a segment of users (e.g. young people that don’t have husbands or wives, or people that don’t frequently talk about their genitalia). Thus a wide range of common-sense knowledge is useful for bootstrapping, which is one reason why automatic approaches are needed to acquire it. 714 0 50000 100000 150000 200000 engagement ring ♀ Note: showing only first 10% of attributes used boyfriend ♀ hubby ♀ bra ♀ future wife ♂ natural hair ♀ jewelry ♀ bride ♂ beard ♂ due date ♀ wife ♂ husband ♀ tux ♂ purse ♀ Figure 1: Frequency with which attributes are used to auto-annotate examples in the bootstrapping approach. The plot identifies some attributes and their corresponding class (labeled via gender symbol). Majority-class baseline 60.9 Supervised on 100 examples 72.0 Supervised on 2282 examples 84.0 Supervised on 100 examples + ARules 74.7 Supervised on 2282 examples + ARules 84.7 Bootstrapped 86.0 BootStacked 87.2 Table 3: Classification accuracy (%) on gold standard test data for user gender prediction on Twitter 6 Results Our main classification results are presented in Table 3. 
The majority-class baseline for this task is to always choose Female; this achieves an accuracy of 60.9%. A standard classifier trained on 100 gold-standard training examples improves over this baseline, to 72.0%, while one with 2282 training examples achieves 84.0%. This latter result represents the current state-of-the-art: a classifier trained on thousands of gold standard examples, making use of both Usr and BoW features. Our performance compares favourably to Burger et al. (2011), who achieved 81.4% using the same features, but on a very different subset of the data (also including tweets in other languages).7 Applying the ARules as a post-process significantly improves performance in both cases (McNemar’s, p<0.05). It is also possible to use the ARules as a stand-alone system rather than as a post-process, however the coverage is low: we find a distinguishing attribute in 18.3% of the 695 Female instances in the test data, and make the cor7Note that it is possible to achieve even higher performance on gender classification in social media if you have further information about a user, such as their full first and last name (Burger et al., 2011; Bergsma et al., 2013). rect decision in 96.9% of these cases. We find a distinguishing attribute in 11.4% of the 446 Male instances, with 86.3% correct decisions. The Bootstrapped system substantially improves over the state-of-the-art, achieving 86% accuracy and doing so without using any gold standard training data. This is important because having thousands of gold standard annotations for every possible user characterization task, in every domain and social media platform, is not realistic. Combining the bootstrapped classifier with the gold standard annotations in the BootStacked model results in further gains in performance.8 These results provide strong validation for both the inherent utility of class-attributes knowledge in user characterization and the effectiveness of our specific strategies for exploiting such knowledge. Figure 2 shows the learning curve of the Bootstrapped classifier. Performance rises consistently across all the auto-annotated training data; this is encouraging because there is theoretically no reason not to vastly increase the amount of autoannotated data by collecting an even larger collection of tweets. Finally, note that most of the gains of the Bootstrapped system appear to derive from the tweet content itself, i.e. the BoW features. However, the Usr features are also helpful at most training sizes. We provide some of the top-ranked features of the Bootstrapped system in Table 4. We see that a variety of other common-sense knowledge is learned by the system (e.g., the association between males and urinals, boxers, fatherhood, etc.), as well as stylistic clues (e.g. Female users using betcha and xox in their writing). The username 8We observed no further gains in accuracy when applying the ARules as a post-process on top of these systems. 715 60 65 70 75 80 85 90 100 1000 10000 100000 1e+06 Accuracy Number of auto-annotated training pts. BoW+Usr BoW Usr Figure 2: Learning curve for Bootstrapped logistic-regression classifier, with automaticallylabeled data, for different feature classes. features capture reasonable associations between gender classes and particular names (such as mike, tony, omar, etc.) and also between gender classes and common nouns (such as guy, dad, sir, etc.). 
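The BoW and Usr encodings behind features such as those in Table 4 can be sketched as follows; the preprocessing mirrors the description above (lower-casing, digit normalization, log-counts, 2-to-4-character n-grams over a boundary-marked username), while the feature-naming scheme, the boundary symbols, and the +1 inside the log are choices of this sketch rather than details given in the paper.

```python
# Sketch of the BoW (content) and Usr (username) feature encodings.
import math
import re
from collections import Counter

def bow_features(tweets):
    """Real-valued log-count features over lower-cased, digit-normalized text."""
    counts = Counter()
    for tweet in tweets:
        counts.update(re.sub(r"\d", "#", tweet.lower()).split())
    # log(1 + c) keeps singleton words non-zero; the paper just says "log-count"
    return {"bow=" + w: math.log(1 + c) for w, c in counts.items()}

def usr_features(username, n_min=2, n_max=4):
    """Character n-grams (lengths 2-4) over the boundary-marked username,
    in both original-case and lower-cased variants."""
    marked = "^" + username + "$"      # boundary symbols are a choice of this sketch
    feats = {}
    for variant, prefix in ((marked, "usr="), (marked.lower(), "usrlc=")):
        for n in range(n_min, n_max + 1):
            for i in range(len(variant) - n + 1):
                feats[prefix + variant[i:i + n]] = 1.0
    return feats

def user_features(tweets, username):
    feats = bow_features(tweets)
    feats.update(usr_features(username))
    return feats
```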
7 Related Work User Characterization The field of sociolinguistics has long been concerned with how various morphological, phonological and stylistic aspects of language can vary with a person’s age, gender, social class, etc. (Fischer, 1968; Labov, 1972). This early work therefore had an emphasis on analyzing the form of language, as opposed to its content. This emphasis continued into early machine learning approaches, which predicted author properties based on the usage of function words, partsof-speech, punctuation (Koppel et al., 2002) and spelling/grammatical errors (Koppel et al., 2005). Recently, researchers have focused less on the sociolinguistic implications and more on the tasks themselves, naturally leading to classifiers with feature representations capturing content in addition to style (Schler et al., 2006; Garera and Yarowsky, 2009; Mukherjee and Liu, 2010). Our work represents a logical next step for contentbased classification, a step partly suggested by Schler et al. (2006) who noted that “those who are interested in automatically profiling bloggers for commercial purposes would be well served by considering additional features - which we deliberately ignore in this study - such as author selfidentification.” Male BoW features: wife, wifey, sucked, shave, boner, boxers, missus, installed, manly, in-laws, brah, urinal, kickoff, golf, comics, ubuntu, homo, nhl, jedi, fatherhood, nigga, movember, algebra Male Usr features: boy, mike, ben, guy, mr, dad, jr, kid, tony, dog, lord, sir, omar, dude, man, big Female BoW features: hubby, hubs, jewelry, sewing, mascara, fabulous, bf, softball, betcha, motherhood, perky, cozy, zumba, xox, cuddled, belieber, bridesmaid, anorexic, jammies, pad Female Usr features: mrs, mom, jen, lady, wife, mary, joy, mama, pink, kim, diva, elle, woma, ms Table 4: Examples of highly-weighted BoW (content) and Usr (username) features (in descending order of weight) in the Bootstrapped system for predicting user gender in Twitter. Many recent papers have analyzed the language of social media users, along dimensions such as ethnicity (Eisenstein et al., 2011; Rao et al., 2011; Pennacchiotti and Popescu, 2011; Fink et al., 2012) time zone (Kiciman, 2010), political orientation (Rao et al., 2010; Pennacchiotti and Popescu, 2011) and gender (Rao et al., 2010; Burger et al., 2011; Van Durme, 2012). Class-Attribute Extraction The idea of using simple patterns to extract useful semantic relations goes back to Hearst (1992) who focused on hyponyms. Hearst reports that she “tried applying this technique to meronymy (i.e., the part/whole relation), but without great success.” Berland and Charniak (1999) did have success using Hearststyle patterns for part-whole detection, which they attribute to their “very large corpus and the use of more refined statistical measures for ranking the output.” Girju et al. (2006) devised a supervised classification scheme for part/whole relation discovery that integrates the evidence from multiple patterns. These efforts focused exclusively on the meronymy relation as used in WordNet (Miller et al., 1990). Indeed, Berland and Charniak (1999) attempted to filter out attributes that were regarded as qualities (like driveability) rather than parts (like steering wheels) by removing words ending with the suffixes -ness, -ing, and -ity. In our work, such qualities are not filtered and are ultimately valuable in classification; for example, the attributes peak fertility and loveliness are highly 716 associated with females. 
As subsequent research became more focused on applications, looser definitions of class attributes were adopted. Almuhareb and Poesio (2004) automatically mined class attributes that include parts, qualities, and those with an “agentive” or “telic” role with the class. Their extended set of attributes was shown to enable an improved representation of nouns for the purpose of clustering these nouns into semantic concepts. Tokunaga et al. (2005) define attributes as properties that can serve as focus words in questions about a target class; e.g. director is an attribute of a movie since one might ask, “Who is the director of this movie?” Another line of research has been motivated by the observation that much of Internet search consists of people looking for values of various class attributes (Bellare et al., 2007; Pas¸ca and Van Durme, 2007; Pas¸ca and Van Durme, 2008; Alfonseca et al., 2010). By knowing the attributes of different classes, search engines can better recognize that queries such as “altitude guadalajara” or “population guadalajara” are seeking values for a particular city’s “altitude” and “population” attributes (Pas¸ca and Van Durme, 2007). Finally, note that Van Durme et al. (2008) compared instance-based and class-based patterns for broad-definition attribute extraction, and found both to be effective. Of course, text-mining with custom-designed patterns is not the only way to extract classattribute information. Experts can manually specify the attributes of entities, as in the WordNet project (Miller et al., 1990). Others have automatically extracted attribute relations from dictionary definitions (Richardson et al., 1998), structured online sources such as Wikipedia infoboxes, (Wu and Weld, 2007) and large-scale collections of high-quality tabular web data (Cafarella et al., 2008). Attribute extraction has also been viewed as a sub-component or special case of the information obtained by general-purpose knowledge extractors (Schubert, 2002; Pantel and Pennacchiotti, 2006). NLP Applications of Common-Sense Knowledge The kind of information derived from class-attribute extraction is sometimes referred to as a type of common-sense knowledge. The need for computer programs to represent commonsense knowledge has been recognized since the work of McCarthy (1959). Lenat et al. (1990) defines common sense as “human consensus reality knowledge: the facts and concepts that you and I know and which we each assume the other knows.” While we are the first to exploit commonsense knowledge in user characterization, common sense has been applied to a range of other problems in natural language processing. In many ways WordNet can be regarded as a collection of common-sense relationships. WordNet has been applied in a myriad of NLP applications, including in seminal works on semantic-role labeling (Gildea and Jurafsky, 2002), coreference resolution (Soon et al., 2001) and spelling correction (Budanitsky and Hirst, 2006). Also, many approaches to the task of sentiment analysis “begin with a large lexicon of words marked with their prior polarity” (Wilson et al., 2009). Like our class-attribute associations, the common-sense knowledge that the word cool is positive while unethical is negative can be learned from associations in web-scale data (Turney, 2002). We might also view information about synonyms or conceptually-similar words as a kind of commonsense knowledge. 
In this perspective, our work is related to recent work that has extracted distributionally-similar words from web-scale data and applied this knowledge in tasks such as named-entity recognition (Lin and Wu, 2009) and dependency parsing (T¨ackstr¨om et al., 2012). 8 Conclusion We have proposed, developed and successfully evaluated a novel approach to user characterization based on exploiting knowledge of user class attributes. The knowledge is obtained using a new algorithm that discovers distinguishing attributes of particular classes. Our approach to discovering distinguishing attributes represents a significant new direction for research in class-attribute extraction, and provides a valuable bridge between the fields of user characterization and lexical knowledge extraction. We presented three effective techniques for leveraging this knowledge within the framework of supervised user characterization: rule-based post-processing, a learning-by-bootstrapping approach, and a stacking approach that integrates the predictions of the bootstrapped system into a system trained on annotated gold-standard training data. All techniques lead to significant improve717 ments over state-of-the-art supervised systems on the task of Twitter gender classification. While our technique has advanced the state-ofthe-art on this important task, our approach may prove even more useful on other tasks where training on thousands of gold-standard examples is not even an option. Currently we are exploring the prediction of finer-grained user roles, such as student, waitress, parent, and so forth, based on extensions to the process laid out here. References Enrique Alfonseca, Marius Pas¸ca, and Enrique Robledo-Arnuncio. 2010. Acquisition of instance attributes via labeled and related instances. In Proc. SIGIR, pages 58–65. Abdulrahman Almuhareb and Massimo Poesio. 2004. Attribute-based and value-based clustering: An evaluation. In Proc. EMNLP, pages 158–165. Kedar Bellare, Partha P. Talukdar, Giridhar Kumaran, Fernando Pereira, Mark Liberman, Andrew McCallum, and Mark Dredze. 2007. Lightly-Supervised Attribute Extraction. In NIPS Workshop on Machine Learning for Web Search. Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proc. ColingACL, pages 33–40. Shane Bergsma, Paul McNamee, Mossaab Bagdouri, Clayton Fink, and Theresa Wilson. 2012. Language identification for creating language-specific Twitter collections. In Proceedings of the Second Workshop on Language in Social Media, pages 65–74. Shane Bergsma, Mark Dredze, Benjamin Van Durme, Theresa Wilson, and David Yarowsky. 2013. Broadly improving user classification via communication-based name and location clustering on twitter. In Proc. NAACL. Matthew Berland and Eugene Charniak. 1999. Finding parts in very large corpora. In Proc. ACL, pages 57–64. Alexander Budanitsky and Graeme Hirst. 2006. Evaluating WordNet-based measures of lexical semantic relatedness. Computational Linguistics, 32(1):13– 47. John D. Burger and John C. Henderson. 2006. An exploration of observable features related to blogger age. In Proc. AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs, pages 15– 20. John D. Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on Twitter. In Proc. EMNLP, pages 1301–1309. Michael J. Cafarella, Alon Y. Halevy, Daisy Zhe Wang, Eugene Wu, and Yang Zhang. 2008. WebTables: exploring the power of tables on the web. Proc. PVLDB, 1(1):538–549. Kenneth W. Church and Patrick Hanks. 
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 721–730, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics The Impact of Topic Bias on Quality Flaw Prediction in Wikipedia Oliver Ferschke†, Iryna Gurevych†‡ and Marc Rittberger‡ † Ubiquitous Knowledge Processing Lab Department of Computer Science, Technische Universit¨at Darmstadt ‡ Information Center for Education German Institute for Educational Research and Educational Information http://www.ukp.tu-darmstadt.de Abstract With the increasing amount of user generated reference texts in the web, automatic quality assessment has become a key challenge. However, only a small amount of annotated data is available for training quality assessment systems. Wikipedia contains a large amount of texts annotated with cleanup templates which identify quality flaws. We show that the distribution of these labels is topically biased, since they cannot be applied freely to any arbitrary article. We argue that it is necessary to consider the topical restrictions of each label in order to avoid a sampling bias that results in a skewed classifier and overly optimistic evaluation results. We factor out the topic bias by extracting reliable training instances from the revision history which have a topic distribution similar to the labeled articles. This approach better reflects the situation a classifier would face in a real-life application. 1 Introduction User generated content is the main driving force of the increasingly social web. Blogs, wikis and forums make up a large amount of the daily information consumed by web users. The main properties of user generated content are a low publication threshold and little or no editorial control, which leads to a high variance in quality. In order to navigate through large repositories of information efficiently and safely, users need a way to quickly assess the quality of the content. Automatic quality assessment has therefore become a key application in today’s information society. However, there is a lack of training data annotated with fine-grained quality information. Wikipedia, the largest encyclopedia on the web, contains so-called cleanup templates, which constitute a sophisticated system of user generated labels that mark quality problems in articles. Recently, these cleanup templates have been used for automatically identifying articles with particular quality flaws in order to support Wikipedia’s quality assurance process in Wikipedia. In a shared task (Anderka and Stein, 2012b), several systems have shown that it is possible to identify the ten most frequent quality flaws with high recall and fair precision. However, quality flaw detection based on cleanup template recognition suffers from a topic bias that is well known from other text classification applications such as authorship attribution or genre identification. We discovered that cleanup templates have implicit topical restrictions, i.e. they cannot be applied to any arbitrary article. As a consequence, corpora of flawed articles based on these templates are biased towards particular topics. We argue that it is therefore not sufficient for evaluating a quality flaw prediction systems to measure how well they can separate (topically restricted) flawed articles from a set of random outliers. It is rather necessary to determine reliable negative instances with a similar topic distribution as the set of positive instances in order to factor out the sampling bias. 
Related studies (Brooke and Hirst, 2011) have proven that topic bias is a confounding factor that results in misleading crossvalidated performance while allowing only near chance performance in practical applications. We present an approach for factoring out the bias from quality flaw corpora by mining reliable negative instances for each flaw from the article revision history. Furthermore, we employ the article revision history to extract reliable positive training instances by using the version of each article at the time it has first been identified as flawed. This way, we avoid including articles with outdated cleanup templates, a frequent phe721 nomenon that can occur when a template is not removed after fixing a problem in an article. In our experiments, we focus on neutrality and style flaws, since they are of particular high importance within the Wikipedia community (Stvilia et al., 2008; Ferschke et al., 2012a) and are recognized beyond Wikipedia in applications such as uncertainty recognition (Szarvas et al., 2012) and hedge detection (Farkas et al., 2010). 2 Related Work Topic bias is a known problem in text classification. Mikros and Argiri (2007) investigate the topic influence in authorship attribution. They found that even simple stylometric features, such as sentence and token length, readability measures or word length distributions show considerable correlations with topic. They argue that many features that were largely considered to be topic neutral are in fact topic-dependent variables. Consequently, results obtained on multitopic corpora are prone to be biased by the correlation of authors with specific topics. Therefore, several authors introduce topic-controlled corpora for applications such as author identification (Koppel and Schler, 2003; Luyckx and Daelemans, 2005) or genre detection (Finn and Kushmerick, 2006). Brooke and Hirst (2011) measure the topic bias in the International Corpus of Learner English and found that it causes a substantial skew in classifiers for native language detection. In accordance with Mikros et al., the authors found that even non-lexicalized meta features, such as vocabulary size or length statistics, depend on topics and cause cross-validated performance evaluations to be unrealistically high. In a practical setting, these biased classifiers hardly exceed chance performance. As already noted above, a similar kind of topic bias negatively influences quality flaw detection in Wikipedia. Anderka et al. (2012) automatically identify quality flaws by predicting the cleanup templates in unseen articles with a one-class classification approach. Based on this work, a competition on quality flaw prediction has been established (Anderka and Stein, 2012b). The winning team of the inaugural edition of the task was able to detect the ten most common quality flaws with an average F1-Score of 0.81 using a PU learning approach (Ferretti et al., 2012). With a binary classification approach, Ferschke et al. (2012b) achieved an average F1-Score of 0.80, while reaching a higher precision than the winning team. A closer examination of the aforementioned quality flaw detection systems reveals a systematic sampling bias in the training data, which leads to an overly optimistic performance evaluation and classifiers that are biased towards particular article topics. Our approach factors out the topic bias from the training data by mining topically controlled training instances from the Wikipedia revision history. 
The results show that flaw detection is a much harder problem in a real-life scenario. 3 Quality Flaws and Flaw Recognition in Wikipedia Quality standards in Wikipedia are mainly defined by the featured article criteria1 and the Wikipedia Manual of Style2. These policies define the characteristics excellent articles have to exhibit. Other sets of quality criteria are adaptations or relaxations of these standards, such as the good article criteria or the quality grading schemes of individual interest groups in Wikipedia. In this work, we focus on quality flaws regarding neutrality and style problems. We chose these categories due to their high importance within the Wikipedia community (Stvilia et al., 2008; Ferschke et al., 2012a) and due to their relevance to content outside of Wikipedia, such as blogs or online news articles. According to the Wikipedia policies3, an article has to be written from a neutral point of view. Thus, authors must avoid stating opinions and seriously contested assertions as facts, avoid presenting uncontested factual assertions as mere opinions, prefer nonjudgmental language and indicate the relative prominence of opposing views. Furthermore, authors have to adhere to the stylistic guidelines defined in the Manual of Style. While this subsumes a broad range of issues such as formatting and article structure, we focus on the style of writing and disregard mere structural properties. Any articles that violate these criteria can be marked with cleanup templates4 to indicate their need for improvement. These templates can thus be regarded as proxies for quality flaws in Wikipedia. 1http://en.wikipedia.org/wiki/WP:FACR 2http://en.wikipedia.org/wiki/WP:STYLE 3http://en.wikipedia.org/wiki/WP:NPOV 4http://en.wikipedia.org/wiki/WP:TM#Cleanup 722 Flaw Description Articles Templates Advert The article appears to be written like an advertisement and is thus not neutral 7,332 2 POV The neutrality of this article is disputed 5,086 10 Globalize The article may not represent a worldwide view of the subject 1,609 1 Peacock The article may contain wording that merely promotes the subject without imparting verifiable information 1,195 1 Neutrality Weasel The article contains vague phrasing that often accompanies biased or unverifiable information 704 4 Tone The tone of the article is not encyclopedic according to the Wikipedia Manual of Style 4,563 6 In-universe The article describes a work or element of fiction in a primarily in-universe stylea 2,227 1 Copy-edit The article requires copy editing for grammar, style, cohesion, tone, or spelling 1,954 6 Trivia Contains lists of miscellaneous information 1,282 2 Essay-like The article is written like a personal reflection or essay 1,244 1 Confusing The article may be confusing or unclear to readers 1,084 1 Style Technical The article may be too technical for most readers to understand 690 2 a According to the Wikipedia Manual of Style, an in-universe perspective describes the article subject matter from the perspective of characters within a fictional universe as if it were real. Table 1: Neutrality and style flaw corpora used in this work Template Clusters Since several cleanup templates might represent different manifestations of the same quality flaw, there is a 1 to n relationship between quality flaws and cleanup templates. For instance, the templates pov-check5, pov6 and npov language7 can all be mapped to the same flaw concerning the neutral point of view of an article. 
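To make the 1-to-n relationship between flaws and templates concrete, the clustering can be thought of as a lookup from normalized template names to flaw labels. The sketch below is illustrative only: the cluster contents are assumptions based on the examples just given (pov-check, pov and npov language mapping to the same neutrality flaw), not the authors' released template lists.

```python
# Illustrative sketch of the template-to-flaw mapping; the cluster contents
# are assumed from the examples in the text, not the authors' actual lists.

FLAW_CLUSTERS = {
    "POV": {"pov", "pov-check", "npov language"},
    "Advert": {"advert", "advertisement"},      # assumed synonyms
    "In-universe": {"in-universe"},
}

# Invert the clusters so each template name resolves to its flaw label.
TEMPLATE_TO_FLAW = {
    template: flaw
    for flaw, templates in FLAW_CLUSTERS.items()
    for template in templates
}

def flaw_for_template(template_name):
    """Resolve a cleanup template name to its flaw cluster, if clustered."""
    return TEMPLATE_TO_FLAW.get(template_name.strip().lower())

print(flaw_for_template("POV-check"))   # -> "POV"
```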
This aggregation of cleanup templates into flaw-clusters is a subjective task. It is not always clear whether a particular template refers to an existing flaw or should be regarded as a separate class. Too many clusters will cause definition overlaps (i.e. similar cleanup templates are assigned to different clusters), while too few clusters will result in unclear flaw definitions, since each flaw receives a wide range of possible manifestations. Template Scope Another important aspect to be considered is the difference in the scope which cleanup templates can have. Inline-templates are placed directly in the text and refer to the sentence or paragraph they are placed in. Templates with a section parameter, refer to the section they are placed in. The majority of templates, however, refer to a whole page. The consideration of the template scope is of particular importance for quality flaw recognition problems. For example, the presence of a cleanup template which marks a single section as not notable does not entail that the whole article is not notable. 5The article has been nominated for a neutrality check 6The neutrality of the article is disputed 7The article contains a non-neutral style of writing Topical Restriction A final aspect that has not been taken into account by related work is that many cleanup templates have restrictions concerning the pages they may be applied to. A hard restriction is the page type (or namespace) a template might be used in. For example, some templates can only be used in articles while others can only be applied to discussion pages. This is usually enforced by maintenance scripts running on the Wikimedia servers. A soft restriction, on the other hand, are the topics of the articles a template can be used in. Many cleanup templates can only be applied to articles from certain subject areas. An example with a particularly obvious restriction is the template in-universe (see Table 1), which should only be applied to articles about fiction. This topical restriction is neither explicitly defined nor automatically enforced, but it plays an important role in the quality flaw recognition task, as the remainder of this paper will show. While flaws merely concerning the structural or linguistic properties of an article are less restricted to individual topics, they are still affected by a certain degree of topical preference. Many subject areas in Wikipedia are organized in WikiProjects8, which have their own ways of reviewing and ensuring quality within their topical scope. Depending on the quality assurance processes established in a WikiProject, different importance is given to individual types of flaws. Thus, the distribution of cleanup templates regarding structural or grammatical flaws is also biased towards certain topics. 8http://en.wikipedia.org/wiki/WP:PROJ 723 We will henceforth subsume the concept of topical preference under the term topical restriction. Quality Flaw Recognition Based on the above definition of quality flaws, we define the quality flaw recognition task similar to Anderka et al. (2012) as follows: Given a sample of articles in which each article has been tagged with any cleanup template τi from a specific template cluster T f thus marking all articles in the sample with a quality flaw f, it has to be decided whether or not an unseen article suffers from f. 
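Operationally, this definition amounts to one binary classification problem per flaw. The following sketch shows only that framing; the choice of negatives is exactly the sampling question discussed in Section 5, and the feature extraction and learner (Weka/SMO in Section 6) are abstracted behind placeholder arguments rather than reproduced here.

```python
# Sketch of the per-flaw binary framing: articles tagged with any template
# from cluster T_f are positives. `featurize`, `tagged_articles` and
# `negative_articles` are hypothetical placeholders, not the paper's code.

def build_binary_task(flaw, tagged_articles, negative_articles, featurize):
    """Assemble (feature_vector, label) pairs for one flaw-specific classifier."""
    X, y = [], []
    for article in tagged_articles[flaw]:       # carries some tau in T_f
        X.append(featurize(article))
        y.append(1)
    for article in negative_articles[flaw]:     # random or topic-matched (Sec. 5.2)
        X.append(featurize(article))
        y.append(0)
    return X, y
```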
4 Data Selection and Corpus Creation For creating our corpora, we start with selecting all cleanup templates listed under the categories neutrality and style in the typology of cleanup templates provided by Anderka and Stein (2012a). Each of the selected templates serves as the nucleus of a template cluster that potentially represents a quality flaw. To each cluster, we add all templates that are synonymous to the nucleus. The synonyms are listed in the template description under redirects or shortcuts. Then we iteratively add all synonyms of the newly added template until no more redirects can be found. Furthermore, we manually inspect the lists of similar templates in the see also sections of the template descriptions and include all templates that refer to the same concept as the other templates in the cluster. As mentioned earlier, this is a subjective task and largely depends on the desired granularity of the flaw definitions. We finally merge semantically similar template clusters to avoid too fine grained flaw distinctions. As a result, we obtain a total number of 94 template clusters representing 60 style flaws and 34 neutrality flaws. From each of these clusters, we remove templates with inline or section scope due to the reasons outlined in Section 3. We also remove all templates that are restricted to pages other than articles (e.g. discussion or user pages). We use the Java Wikipedia Library (Zesch et al., 2008) to extract all articles marked with the selected templates. We only regard flaws with at least 500 affected articles in the snapshot of the English Wikipedia from January 4, 2012. Table 1 lists the final sets of flaws used in this work. For each flaw, the nucleus of the template cluster is provided along with a description, the number of affected articles, and the cluster size. We make the corpora freely available for downFlaw κ F1 Advert .60 .80 Confusing .60 .80 Copy-edit .00 .50 Essay-like .60 .80 Globalize: .60 .80 In-universe .80 .90 Peacock .70 .84 POV .60 .80 Technical .90 .95 Tone .40 .70 Trivia .20 .60 Weasel .50 .74 Table 2: Agreement of human annotator with gold standard load under http://www.ukp.tu-darmstadt. de/data/wiki-flaws/. Agreement with Human Rater Quality flaw detection in Wikipedia is based on the assuption that cleanup templates are valid markers of quality flaws. In order to test the reliability of these user assigned templates as quality flaw markers, we carried out an annotation study in which a human annotator was asked to perform the binary flaw detection task manually. Even though the human performance does not necessarily provide an upper boundary for the automatic classification task, it gives insights into potentially problematic cases and ill-defined annotations. The annotator was provided with the template definitions from the respective template information page as instructions. For each of the 12 article scope flaws, we extracted the plain text of 10 random flawed articles and 10 random untagged articles. The annotator had to decide for each flaw individually whether a given text belonged to a flawed article or not. She was not informed about the ratio of flawed to untagged articles. Table 2 lists the chance corrected agreement (Cohen’s κ) along with the F1 performance of the human annotations against the gold standard corpus. The templates copy-edit and trivia yielded the lowest performance in the study. 
Even though copy-edit templates are assigned to whole articles, they refer to grammatical and stylistic problems of relatively small portions of the text. This increases the risk of overlooking a problematic span of text, especially in longer articles. The trivia template, on the other hand, designates sections that contain miscellaneous information that are not well integrated in the article. Upon manual inspection, we found a wide range of possible manifestations of 724 this flaw ranging from an agglomeration of incoherent factoids to well-structured sections that did not exactly match the focus of the article, which is the main reason for the low agreement. 5 Selection of Reliable Training Instances Independent from the classification approach used to identify flawed articles, reliable training data is the most important prerequisite for good predictions. On the one hand, we need a set of examples that reliably represent a particular flaw, while on the other hand, we need counterexamples which reliably represent articles that do not suffer from the same flaw. The latter aspect is most important for discriminative classification approaches, since they rely on negative instances for training the classifier. However, reliable negative instances are also important for one-class classification approaches, since it is only for the counterexamples (or outliers) that the performance of one-class classifiers can be sufficiently evaluated. It is furthermore important that the positive and the negative instances do not differ systematically in any respect other than the presence or absence of the respective flaws, since any systematic difference will bias the classifier. In this context, the topical restrictions of cleanup templates have to be taken into account. In the following, we describe our approach to extracting reliable training instances from the quality flaw corpora. 5.1 Reliable Positives In previous work, the latest available versions of flawed articles have been used as positive training instances. However, we found upon manual inspection of the data that a substantial number of articles has been significantly edited between the time tτ, at which the template was first assigned, and the time te, at which the articles have been extracted. Using the latest version at time te can thus include articles in which the respective flaw has already been fixed without removing the cleanup template. Therefore, we use the revision of the article at time tτ to assure that the flaw is still present in the training instance. We use the Wikipedia Revision Toolkit (Ferschke et al., 2011), an enhancement of the Java Wikipedia Library, to gain access to the revision history of each article. For every article in the corpus of positive examples for flaw f that is marked with template τ ∈T f , we backtrack the revision history chronologically, until we find the first revision rtτ−1 that is not tagged with τ . We then add the succeeding revision rtτ to the corpus of reliable positives for flaw f. In Section 6, we show that the classification performance improves for most flaws when using reliable positives instead of the latest available article versions. 5.2 Reliable Negatives and Topical Restriction A central problem of the quality flaw recognition approach is the fact that there are no articles available that are tagged to not contain a particular quality problem. So far, two solutions to this issue have been proposed in related work. Anderka et al. 
(2012) tackle the problem with a one-class classifier that is trained on the positive instances alone thus eradicating the need for negative instances in the training phase. However, in order to evaluate the classifier, a set of outliers is needed. The authors circumvent this issue by evaluating their classifiers on a set of random untagged instances and a set of featured articles and argue that the actual performance of predicting the quality flaws lies between the two. Ferretti et al. (2012) follow a two step classification approach (PU learning) that first uses a Naive Bayes classifier trained on positive instances and random untagged articles to pre-classify the data. In a second phase, they use the negatives identified by the Naive Bayes classifier to train a Support Vector Machine that produces the final predictions. Even though the Naive Bayes classifier was supposed to identify reliable negatives, the authors found no significant improvement over a random selection of negative instances, which effectively renders the PU learning approach redundant. None of the above approaches consider the issue of topical restriction mentioned in Section 3, which introduces a systematic bias to the data. Both approaches sample random negative instances Arnd for any given set of flawed articles A f from a set of untagged articles Au (see Fig. 1a). In order to factor out the article topics as a major characteristic for distinguishing flawed articles from the set of outliers, reliable negative instances Arel have to be sampled from the restricted topic set Atopic that contains articles with a topic distribution similar to the flawed articles in A f (see Fig. 1b). This will avoid the systematic bias and 725 (a) Random negatives (b) Reliable negatives Figure 1: Sampling of negative instances for a given set of flawed articles (A f ). Random negatives (Arnd) are sampled from articles without any cleanup templates (Au). Reliable negatives (Arel) are sampled from the set of articles (Atopic) with the same topic distribution as A f result in a more realistic performance evaluation. In the following, we present our approach to extracting reliable negative training instances that conform with the topical restrictions of the cleanup templates. Without loss of generality, we assume that an article, from which a cleanup template τ ∈T f is deleted at a point in time dτ, the article no longer suffers from flaw f at that point in time. Thus, the revision rdτ is a reliable negative instance for the flaw f. Additionally, since the article was once tagged with τ ∈T f , it belongs to the the same restricted topic set Atopic as the positive instances for flaw f. We use the Apache Hadoop9 framework and WikiHadoop10, an input format for Wikipedia XML dumps, for crawling the whole revision history of the English Wikipedia on a compute cluster. WikiHadoop allows each Hadoop mapper to receive adjacent revision pairs, which makes it possible to compare the changes made from one revision to the next. For every template τ ∈T f , we extract all adjacent revision pairs (rdτ−1, rdτ), in which the first revision contains τ and the second does not contain τ. Since there are occasions in which a template is replaced by another template from the same cluster, we ensure that rdτ does also not contain any other template from cluster T f before we finally add the revision to the set of reliable negatives for flaw f. 
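A minimal sketch of the two revision-history heuristics is given below. It assumes a chronological revision list `revs` for one article and a `has_template(rev, cluster)` test; both are stand-ins for the Java Wikipedia Library / WikiHadoop machinery actually used, and the per-cluster check slightly simplifies the per-template bookkeeping described above.

```python
# Minimal sketch of the revision-history heuristics from Sections 5.1 and 5.2.
# `revs` (chronological revisions of one article) and `has_template(rev, cluster)`
# are assumed helpers, not real JWPL or WikiHadoop APIs.

def reliable_positive(revs, cluster, has_template):
    """Backtrack from the latest (tagged) revision to the first untagged one
    r_{t_tau - 1} and return its successor r_{t_tau} (Section 5.1)."""
    i = len(revs) - 1
    while i > 0 and has_template(revs[i - 1], cluster):
        i -= 1
    return revs[i]

def reliable_negatives(revs, cluster, has_template):
    """Yield every revision r_{d_tau} created right after all templates of the
    cluster disappeared from the article (Section 5.2)."""
    for prev, curr in zip(revs, revs[1:]):
        if has_template(prev, cluster) and not has_template(curr, cluster):
            yield curr
```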
9 http://hadoop.apache.org
10 https://github.com/whym/wikihadoop
In the remainder of this section, we evaluate the topical similarity between the positive and the negative set of articles for each flaw using both our method and the original approach. In Wikipedia, the topic of an article is captured by the categories assigned to it. In order to compare two sets of articles with respect to their topical similarity, we represent each article set as a category frequency vector. Formally, we calculate for each set the vector $\vec{C} = (w_{c_1}, w_{c_2}, \ldots, w_{c_n})$, with $w_{c_i}$ being the weight of category $c_i$, i.e. the number of times it occurs in the set, and $n$ being the total number of categories in Wikipedia. We can then estimate the topical similarity of two article sets by calculating the cosine similarity of their category frequency vectors $\vec{C}_1$ and $\vec{C}_2$ (abbreviated $A$ and $B$) as
$$\mathrm{sim}(A, B) = \frac{A \cdot B}{\lVert A\rVert\,\lVert B\rVert} = \frac{\sum_{i=1}^{n} A_i \times B_i}{\sqrt{\sum_{i=1}^{n} (A_i)^2} \times \sqrt{\sum_{i=1}^{n} (B_i)^2}}$$
Table 3 gives an overview of the similarity scores between each positive training set and the corresponding reliable negative set as well as between each positive set and a random set of untagged articles. We can see that the topics of articles in the positive training sets are highly similar to the topics of the corresponding reliable negative articles, while they show little similarity to the articles in the random set. This implies that the systematic bias introduced by the topical restriction has largely been eradicated by our approach.

Flaw          (Af, Arel)   (Af, Arnd)
Advert        .996         .118
Confusing     .996         .084
Copy-edit     .993         .197
Essay-like    .996         .132
Globalize     .992         .023
In-universe   .996         .014
Peacock       .995         .310
POV           .994         .252
Technical     .995         .018
Tone          .996         .228
Trivia        .980         .184
Weasel        .976         .252
Table 3: Cosine similarity scores between the category frequency vectors of the flawed article sets and the respective random or reliable negatives

Individual flaws have differently strong topical restrictions. The strength of this restriction depends on the size of Atopic. That is, a flaw such as in-universe is restricted to a very narrow selection of articles, while a flaw such as copy edit can be applied to most articles and rather shows a topical preference, due to the reasons outlined in Section 3. It is therefore to be expected that flaws with a small Atopic are more prone to the topic bias.
6 Experiments
In the following, we describe our system architecture and the setup of our experiments. Our system for quality flaw detection follows the approach by Ferschke et al. (2012b), since it has been particularly designed as a modular system based on the Unstructured Information Management Architecture11, which makes it easy to extend. Instead of using Mallet (McCallum, 2002) as a machine learning toolkit, we employ the Weka Data Mining Software (Hall et al., 2009) for classification, since it offers a wider range of state-of-the-art machine learning algorithms. For each of the 12 quality flaws, we employ three different dataset configurations. The BASE configuration uses the newest version of each flawed article as positive instances and a random set of untagged articles as negative instances. The RELP configuration uses reliable positives, as described in Section 5.1, in combination with random outliers. Finally, the RELALL configuration employs reliable positives in combination with the respective reliable negatives as described in Section 5.2.
Features
An extensive survey of features for quality flaw recognition has been provided by Anderka et al. (2012).
We selected a subset of these features for our experiments and grouped them into four feature sets in order to determine how well different combinations of features perform in the task. 11http://uima.apache.org Category Feature type NONGRAM NGRAM NOWIKI ALL Lexical Article ngrams • • • Info to noise ratio • • • Network # External links • • # Outlinks • • # Outlinks per sentence • • # Language links • • References Has reference list • • # References • • # References per sentence • • Revision # Revisions • • # Unique contributors • • Structure # Empty sections • • Mean section size • • # Sections • • # Lists • • Question rate • • • Readability ARI • • • Coleman-Liau • • • Flesch • • • Flesch-Kincaid • • • Gunning Fog • • • Lix • • • SMOG-Grading • • • Named Entity # Person entities∗ • • • # Organization entities∗ • • • # Location entities∗ • • • Misc # Characters • • • # Sentences • • • # Tokens • • • Average sentence length • • • Article lead length • • Lead to article ratio • • # Discussions • • ∗newly introduced feature # number of instances Table 4: Feature sets used in the experiments Table 4 lists all feature types used in our experiments. Since the feature space becomes large due to the ngram features, we prune it in two steps. First, we filter the ngrams according to their document frequency in the training corpus. We discard all ngrams that occur in less than x% and more than y% of all documents. Several values for x and y have been evaluated in parameter tuning experiments. The best results have been achieved with x=2 and y=90. In a second step, we apply the Information Gain feature selection approach (Mitchell, 1997) to the remaining set to determine the most useful features. Learning Algorithms We evaluated several learning algorithms from the Weka toolkit with respect to their performance on 727 Algorithm Average F1 SVM RBF Kernel 0.82 AdaBoost (decision stumps) 0.80 SVM Poly Kernel 0.79 RBF Network 0.78 SVM Linear Kernel 0.77 SVM PUK Kernel 0.76 J48 0.75 Naive Bayes 0.72 MultiBoostAB (decision stumps) 0.71 Logistic Regression 0.60 LibSVM One Class 0.67 Table 5: Average F1-scores over all flaws on RELP using all features the quality flaw recognition task. Table 5 shows the average F1-score of each algorithm on the RELP dataset using all features. The performance has been evaluated with 10-fold cross validation on 2,000 documents split equally into positive and negative instances. One class classifiers are trained on the positive instances alone. We determined the best parameters for each algorithms in a parameter optimization run and list the results of the best configuration. Overall, Support Vector Machines with RBF kernels yielded the best average results and outperformed the other algorithms on every flaw. We used a sequential minimal optimization (SMO) algorithm (Platt, 1998) to train the SVMs and used different γ-values for the RBF kernel function. In contrast to Ferretti et al. (2012), we did not see significant improvements when optimizing γ for each individual flaw, so we determined one best setting for each dataset. Since SVMs with RBF kernels are a special case of RBF networks that fit a single basis function to the data, we also used general RBF networks that can employ multiple basis functions, but we did not achieve better results with that approach. One-class classification, as proposed by Anderka et al. (2012), did not perform well within our setup. Even though we used an out-of-thebox one class classifier, we achieve similar results as Anderka et al. 
in their pessimistic setting, which best resembles our configuration. However, the performance still lags behind the other approaches in our experiments. The best-performing algorithm reported by Ferschke et al. (2012b), AdaBoost with decision stumps as a weak learner, showed the second best results in our experiments.
7 Evaluation and Discussion
The SVMs achieve a similar cross-validated performance on all feature sets containing ngrams, showing only minor improvements for individual flaws when adding non-lexical features. This suggests that the classifiers largely depend on the ngrams and that other features do not contribute significantly to the classification performance. While structural quality flaws can be well captured by special purpose features or intensional modeling, as related work has shown, more subtle content flaws such as the neutrality and style flaws are mainly captured by the wording itself. Textual features beyond the ngram level, such as syntactic and semantic qualities of the text, could further improve the classification performance of these flaws and should be addressed in future work. Table 6 shows the performance of the SVMs with RBF kernel (γ = 0.01 for BASE and RELP, γ = 0.001 for RELALL) on each dataset using the NGRAM feature set. The average performance based on NOWIKI is slightly lower, while using ALL features results in slightly higher average F1-scores. However, the differences are not statistically significant and are thus omitted. Classifiers using the NONGRAM feature set achieved average F1-scores below 0.50 on all datasets. The results have been obtained by 10-fold cross validation on 2,000 documents per flaw. The classifiers trained on reliable positives and random untagged articles (RELP) outperform the respective classifiers based on the BASE dataset for most flaws. This confirms our original hypothesis that using the appropriate revision of each tagged article is superior to using the latest available version from the dump. The performance on the RELALL dataset, in which the topic bias has been factored out, yields lower F1-scores than the two other approaches. Flaws that are restricted to a very narrow set of topics (i.e. Atopic in Fig. 1b is small), such as the in-universe flaw, show the biggest drop in performance. Since the topic bias plays a major role in the quality flaw detection task, as we have shown earlier, the topic-controlled classifier cannot take advantage of the topic information, while the classifiers trained on the other corpora can make use of this characteristic as the most discriminative features. In the RELALL setting, however, the differences between the positive and negative instances are largely determined by the flaws alone. Classifiers trained on such a dataset therefore come closer to recognizing the actual quality flaws, which makes them more useful in a practical setting despite lower cross-validated scores. In addition to cross-validation, we performed a cross-corpus evaluation of the classifiers for each flaw. To this end, we evaluated the performance of the unbiased classifiers (trained on RELALL) on the biased data (RELP) and vice versa. In this setup, the positive training and test instances remain the same in both settings, while the unbiased data contains negative instances sampled from Arel and the biased data from Arnd (see Figure 1). With the NGRAM feature set, the reliable classifiers outperformed the unreliable classifiers on all flaws that can be well identified with lexical cues, such as Advert or Technical.
In the biased case, we found both topic-related and flaw-specific ngrams among the most highly ranked ngram features. In the unbiased case, most of the informative ngrams were flaw-specific expressions. Consequently, biased classifiers fail on the unbiased dataset, in which the positive and negative classes are sampled from the same topics, which renders the highly ranked topic ngrams unusable. Flaws that do not largely rely on lexical cues, however, cannot be predicted more reliably with the unbiased classifier. This means that additional features are needed to describe these flaws. We tested this hypothesis by using the full feature set ALL and saw a substantial improvement on the side of the unbiased classifier, while the performance of the biased classifier remained unchanged. A direct comparison of our results to related work is difficult, since neutrality and style flaws have not been targeted before in a similar manner. However, the Advert flaw was also part of the ten flaw types in the PAN Quality Flaw Recognition Task (Anderka and Stein, 2012b). The best system achieved an F1 score of 0.839, which is just below the results of our system on the BASE dataset, the configuration most similar to the PAN setup.

Flaw          BASE   RELP   RELALL
Advert        .86    .88    .75
Confusing     .76    .80    .70
Copy edit     .81    .73    .72
Essay-like    .79    .83    .64
Globalize     .85    .87    .69
In-universe   .96    .96    .69
Peacock       .77    .82    .69
POV           .75    .80    .71
Technical     .87    .88    .67
Tone          .70    .79    .69
Trivia        .72    .77    .70
Weasel        .69    .77    .72
Avg.          .79    .83    .70
Table 6: F1 scores for the 10-fold cross validation of the SVMs with RBF kernel on all datasets using NGRAM features

8 Conclusions
We showed that text classification based on Wikipedia cleanup templates is prone to a topic bias which causes skewed classifiers and overly optimistic cross-validated evaluation results. This bias is known from other text classification applications, such as authorship attribution, genre detection and native language detection. We demonstrated how to avoid the topic bias when creating quality flaw corpora. Unbiased corpora are not only necessary for training unbiased classifiers, they are also invaluable resources for gaining a deeper understanding of the linguistic properties of the flaws. Unbiased classifiers better reflect the performance of quality flaw recognition “in the wild”, because they detect actual flawed articles rather than identifying the articles that are prone to certain quality flaws due to their topic or subject matter. In our experiments, we presented a system for identifying Wikipedia articles with style and neutrality flaws, a novel category of quality problems that is of particular importance within and outside of Wikipedia. We showed that selecting a reliable set of positive training instances mined from the revision history improves the classification performance. In future work, we aim to extend our quality flaw detection system to not only find articles that contain a particular flaw, but also to identify the flaws within the articles, which can be achieved by leveraging the positional information of in-line cleanup templates.
Acknowledgments
This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I/82806, and by the Hessian research excellence program “Landes-Offensive zur Entwicklung Wissenschaftlich-Ökonomischer Exzellenz” (LOEWE) as part of the research center “Digital Humanities”.
References
Maik Anderka and Benno Stein. 2012a. A Breakdown of Quality Flaws in Wikipedia.
In 2nd Joint WICOW/AIRWeb Workshop on Web Quality, pages 11–18, Lyon, France. Maik Anderka and Benno Stein. 2012b. Overview of the 1st International Competition on Quality Flaw Prediction in Wikipedia. In CLEF 2012 Evaluation Labs and Workshop – Working Notes Papers. Maik Anderka, Benno Stein, and Nedim Lipka. 2012. Predicting Quality Flaws in User-generated Content: The Case of Wikipedia. In 35th International ACM Conference on Research and Development in Information Retrieval, Portland, OR, USA. Julian Brooke and Graeme Hirst. 2011. Native language detection with ’cheap’ learner corpora. In Learner Corpus Research 2011 (LCR 2011). Rich´ard Farkas, Veronika Vincze, Gy¨orgy M´ora, J´anos Csirik, and Gy¨orgy Szarvas. 2010. The CoNLL2010 shared task: learning to detect hedges and their scope in natural language text. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL ’10: Shared Task, pages 1–12, Stroudsburg, PA, USA. Association for Computational Linguistics. Edgardo Ferretti, Donato Hern´andez Fusilier, Rafael Guzm´an-Cabrera, Manuel Montes y G´omez, Marcelo Errecalde, and Paolo Rosso. 2012. On the Use of PU Learning for Quality Flaw Prediction in Wikipedia. In CLEF 2012 Evaluation Labs and Workshop – Working Notes Papers. Oliver Ferschke, Torsten Zesch, and Iryna Gurevych. 2011. Wikipedia Revision Toolkit: Efficiently Accessing Wikipedia’s Edit History. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. System Demonstrations, pages 97–102, Portland, OR, USA. Oliver Ferschke, Iryna Gurevych, and Yevgen Chebotar. 2012a. Behind the Article: Recognizing Dialog Acts in Wikipedia Talk Pages. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 777– 786, Avignon, France. Oliver Ferschke, Iryna Gurevych, and Marc Rittberger. 2012b. FlawFinder: A Modular System for Predicting Quality Flaws in Wikipedia. In CLEF 2012 Evaluation Labs and Workshop – Working Notes Papers, Rome, Italy. Aidan Finn and Nicholas Kushmerick. 2006. Learning to classify documents according to genre. Journal of the American Society for Information Science and Technology, 57(11):1506–1518. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA Data Mining Software: An Update. SIGKDD Explorations, 11(1):10–18. Moshe Koppel and Jonathan Schler. 2003. Exploiting stylistic idiosyncrasies for authorship attribution. In Workshop on Computational Approaches to Style Analysis and Synthesis, pages 69–72. K. Luyckx and W. Daelemans. 2005. Shallow text analysis and machine learning for authorship attribution. In Proceedings of the Fifteenth Meeting of Computational Linguistics in the Netherlands (CLIN 2004), pages 149–160. Andrew Kachites McCallum. 2002. MALLET: A Machine Learning for Language Toolkit. George K. Mikros and Eleni K. Argiri. 2007. Investigating topic influence in authorship attribution. In Proceedings of the SIGIR 2007 International Workshop on Plagiarism Analysis, Authorship Identification, and Near-Duplicate Detection, PAN 2007, Amsterdam, Netherlands. Thomas Mitchell. 1997. Machine Learning. McGrawHill Education, New York, NY, USA, 1st edition. John C Platt. 1998. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods: Support Vector Learning, pages 185–208, Cambridge, MA, USA. Besiki Stvilia, Michael B. Twidale, Linda C. 
Smith, and Les Gasser. 2008. Information Quality Work Organization in Wikipedia. Journal of the American Society for Information Science and Technology, 59(6):983–1001. Gy¨orgy Szarvas, Veronika Vincze, Rich´ard Farkas, Gy¨orgy M´ora, and Iryna Gurevych. 2012. Crossgenre and cross-domain detection of semantic uncertainty. Comput. Linguist., 38(2):335–367. Torsten Zesch, Christof M¨uller, and Iryna Gurevych. 2008. Extracting Lexical Semantic Knowledge from Wikipedia and Wiktionary. In Proceedings of the 6th International Conference on Language Resources and Evaluation, Marrakech, Morocco. 730
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 731–741, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Mining Informal Language from Chinese Microtext: Joint Word Recognition and Segmentation Aobo Wang1 Min-Yen Kan1,2∗ 1 Web IR / NLP Group (WING) 2 Interactive and Digital Media Institute (IDMI) National University of Singapore 13 Computing Link, Singapore 117590 {wangaobo,kanmy}@comp.nus.edu.sg Abstract We address the problem of informal word recognition in Chinese microblogs. A key problem is the lack of word delimiters in Chinese. We exploit this reliance as an opportunity: recognizing the relation between informal word recognition and Chinese word segmentation, we propose to model the two tasks jointly. Our joint inference method significantly outperforms baseline systems that conduct the tasks individually or sequentially. 1 Introduction User generated content (UGC) – including microblogs, comments, SMS, chat and instant messaging – collectively referred to as microtext by Gouwset et al. (2011) or network informal language by Xia et al. (2005), is the hallmark of the participatory Web. While a rich source that many applications are interested in mining for knowledge, microtext processing is difficult to process. One key reason for this difficulty is the ubiquitous presence of informal words – anomalous terms that manifest as ad hoc abbreviations, neologisms, unconventional spellings and phonetic substitutions. Such informality is often present in oral conversation, and user-generated microblogs reflect this informality. Natural language processing (NLP) tools largely fail to work properly on microtext, as they have largely been trained on formally written text (i.e., newswire). Recent work has started to address these shortcomings (Xia and Wong, 2006; Kobus et al., 2008; Han and Baldwin, 2011). Informal words and their usage in microtext evolves quickly, following social trends and news events. ∗This research is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office. These characteristics make it difficult for lexicographers to compile lexica to keep with the pace of language change. We focus on this problem in the Chinese language. Through our analysis of a gathered Chinese microblog corpus, we observe that Chinese informal words originate from three primary sources, as given in Table 1. But unlike noisy words in English, Chinese informal words are more difficult to mechanically recognize for two critical reasons: first, Chinese does not employ word delimiters; second, Chinese informal words combine numbers, alphabetic letters and Chinese characters. Techniques for English informal word detection that rely on word boundaries and informal word orthography need to be adapted for Chinese. Consider the microtext “ •g†” (meaning “Don’t tell me the spoilers (to a movie or joke)”, also in Table 1). If “ •” (“don’t”) and “ †” (past tense marker) are correctly recognized as two words, we may predict the previously unseen characters “g” (“tell spoilers”) as an informal word, based on the learned Chinese language patterns. However, state-of-the-art Chinese segmenters1 incorrectly yield “ • g †”, preferring to chunk “ †” (“thoroughly”) as a word, as they do not consider the possibility that “g” (“spoiler”) could be an informal word. 
This example illustrates the mutual dependency between Chinese word segmentation (henceforth, CWS) and informal word recognition (IWR) that should be solved jointly. Hence, rather than pipeline the two processes serially as previous work, we formulate it as a twolayer sequential labeling problem. We employ factorial conditional random field (FCRF) to solve both CWS and IWR jointly. To our best knowledge, this is the first work that shows how Chinese microtext can be analyzed from raw text to 1http://www.ictclas.org/index.html 731 Table 1: Our classification of Chinese informal words as originating from three primary sources. For Phonetic Substitutions, pronunciation is indicated by the phonetic Pinyin transcription system. Informal Word Formal Word Example Sentence English Translation 1) Phonetic ( (mu4 you3) ¡ (mei2 you3) Ñ:( ( ( úßf No taxi in the development area Substitutions i¸ì(hai2 zhi3 men) iPì(hai2 zi men) wІi i i¸ ¸ ¸ì ì ì Get up kids bs Æ(bi shi) bs` I despise you 2) Abbreviation L8 Lb8 eL L L8 8 8' Let’s play board games g gÅ2 •g g g  † Don’t tell (me) the spoilers 3) Neologisms Ù› ˆÒ Ù Ù Ù› › ›J So awesome! Ò@ Å-p ¦Ò Ò Ò@ @ @ƒ Quickly purchase it derive joint solutions for both problems of CWS and IWR. We also propose novel features for input to the joint inference. Our techniques significantly outperform both research and commercial state-of-the-art for these problems, including twostep linear CRF baselines which perform the two tasks sequentially. We detail our methods in Section 2. In Section 3, we first describe the details of our dataset and baseline systems, followed by demonstrating two sets of experiments for CWS and IWR, respectively. Section 4 offers the discussion on error analysis and limitations. We discuss related work in Section 5, before concluding our paper. 2 Methodology Given an input Chinese microblog post, our method simultaneously segments the sentences into words (the Chinese Word Segmentation, CWS, task), and marks the component words as informal or formal ones (the Informal Word Recongition, IWR, task). 2.1 Problem Formalization The two tasks are simple to formalize. The IWR task labels each Chinese character with either an F (part of a formal word) or IF (informal word). For the CWS task, we follow the widely-used BIES coding scheme (Low et al., 2005; Hai et al., 2006), where B, I, E and S stand for beginning of a word, inside a word, end of a word and singlecharacter word, respectively. As a result, we have two (hidden) labels to associate with each (observable) character. Figure 1 illustrates an example microblog post graphically, where the labels are in circles and the observations are in squares. The two informal words in the example post are “( ” (normalized form: “¡ ”; English gloss: “no”) and “rp” (“ºÁ<”; “luck”). 2.2 Conditional Random Field Models Given the general performance and discriminative framework, Conditional Random Fields (CRFs) (Lafferty et al., 2001) is a suitable framework for tackling sequence labeling problems. Other alternative frameworks such as Markov Logic Networks (MLNs) and Integer Linear Programming (ILP) could also be considered. However, we feel that for this task, formulating efficient global formulas (constraints) for MLN (ILP) is comparatively less straightforward than in other tasks (e.g, compared to Semantic Role Labeling, where the rules may come directly from grammatical constraints). 
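The two-layer encoding of Section 2.1 can be made concrete with a short sketch. The toy input below mirrors the "no taxi in the development area" post of Figure 1, but uses ASCII placeholders instead of the Chinese characters (which are not reproducible here); in the real task the input is unsegmented and both label chains are predicted, not derived from gold words as in this illustration.

```python
# Toy illustration of the two label chains the model predicts jointly:
# one BIES segmentation tag and one F/IF informality tag per character.
# The pre-segmented, ASCII input is an invented placeholder.

def encode(words):
    """words: list of (word, is_informal) pairs -> per-character label chains."""
    chars, seg, inf = [], [], []
    for word, informal in words:
        tags = ["S"] if len(word) == 1 else ["B"] + ["I"] * (len(word) - 2) + ["E"]
        for ch, tag in zip(word, tags):
            chars.append(ch)
            seg.append(tag)
            inf.append("IF" if informal else "F")
    return chars, seg, inf

# A three-word post whose first word is an informal spelling of "no".
chars, seg, inf = encode([("no", True), ("taxi", False), ("here", False)])
print(list(zip(chars, seg, inf)))
# [('n','B','IF'), ('o','E','IF'), ('t','B','F'), ..., ('e','E','F')]
```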
CRFs represent a basic, simple and well-understood framework for sequence labeling, making it a suitable framework for adapting to perform joint inference. 2.2.1 Linear-Chain CRF A linear-chain CRF (LCRF; Figure 2a) predicts the output label based on feature functions provided by the scientist on the input. In fact, the LCRF has been used for the exact problem of CWS (Sun and Xu, 2011), garnering state-of-theart performance, and as such, validate it as a strong baseline for comparison. 2.2.2 Factorial CRF To properly model the interplay between the two sub-problems, we employ the factorial CRF (FCRF) model, which is based on the dynamic CRF (DCRF) (Sutton et al., 2007). By introducing a pairwise factor between different variables at each position, the FCRF model results as a special case of the DCRF. A FCRF captures the joint distribution among various layers and jointly predicts across layers. Figure 2 illustrates both the LCRF and FCRF models, where cliques include within-chain edges (e.g., yt, yt+1) in both LCRF and FCRF models, and the between-chain edges (e.g., yt, zt) only in the FCRF. 732 开 啊 低 值 品 人 , 车 租 出 有 没 区 发 F F IF IF F F F F IF IF F F F F I E I B S E I B E B E S B S 开 啊 低 值 p r , 车 租 出 有 木 区 发 There character my poor how , zone development the in taxi no is is Figure 1: A Chinese microtext (bottom layer) with annotations for IWR (top layer) and CWS (middle layer). The bottom three lines give the normalized Chinese form, its pronuniciation in Pinyin and aligned English translation. yt xt yt+1 xt+1 yt-1 xt-1 (a) Linear-chain CRF yt xt yt+1 xt+1 yt-1 xt-1 zt zt+1 zt-1 (b) Two-layer Factorial CRF Figure 2: Graphical representations of the two types of CRFs used in this work. yt denotes the 1st layer label, zt denotes the 2nd layer label, and xt denotes the observation sequence. Although the FCRF can be collapsed into a LCRF whose state space is the cross-product of the outcomes of the state variables (i.e., 8 labels in this case), Sutton et al. (2007) noted that such a LCRF requires not only more parameters in the number of variables, but also more training data to achieve equivalent performance with an FCRF. Given the limited scale of the state space and training data, we follow the FCRF model, using exact Junction Tree (Jensen, 1996) inference and decoding algorithm to perform prediction. 2.3 CRF Features We use three broad feature classes – lexical, dictionary-based and statistical features – aiming to distinguish the output classes for the CWS and IRW problems. Character-based sequence labeling is employed for word segmentation due to its simplicity and robustness to the unknown word problem (Xue, 2003). A key contribution of our work is also to propose novel features for joint inference. We propose new features for the dictionary-based and statistical feature classes, which we have marked in the discussion below with “(*)”. We later examine their efficacy in Section 3. Lexical Features. As a foundation, we employ lexical (n-gram) features informed by the previous state-of-the-art for CWS (Sun and Xu, 2011; Low et al., 2005). These features are listed below2: • Character 1-gram: Ck(i −4 < k < i + 4) • Character 2-gram: CkCk+1(i −4 < k < i + 3) • Character 3-gram: CkCk+1Ck+2(i −3 < k < i + 2) • Character lexicon: C−1C1 This feature is used to capture the common indicators in Chinese interrogative sentences. (e.g., “/ /” (“whether or not”), “} }” (“OK or not”)) • Whether Ck and Ck+1 are identical, for i − 4 < k < i + 3. 
This feature is used to capture the words of employing character doubling in Chinese. (e.g., “ÜÜ” (“see you”), “))” (“every day”)) Dictionary-based Features. We use features that indicate whether the input character sequence 2For notational convenience, we denote a candidate character token Ci as having a context ...Ci−1CiCi+1.... We use Cm:n to express a subsequence starting at the position m and ending at n. len stands for the length of the subsequence, and offset denotes the position offset of Cm:n from the current character Ci. We use b (beginning), m (middle) and e (ending) to indicate the position of Ck (m ≤k ≤n) within the string Cm:n. 733 matches entries in certain lexica. We use the online dictionary from Peking University as the formal lexicon and the compiled informal word list from our training instances as the informal lexicon. In addition, we employ additional online word lists3 to distinguish named entities and function words from potential informal words. As shown in Table 1, alphabetic sequences in microblogs may refer to Chinese Pinyin or Pinyin abbreviations, rather than English (e.g., “bs” for bi shi; “to despise”). Hence, we added dictionarybased features to indicate the presence of Pinyin initials, finals and standard Pinyin expansions, using a UK English word list4. The final list of dictionary-based features employed are: • If Ck (i −4 < k < i + 4) is a surname: Surname@k • (*) If Ck (i −4 < k < i + 4) is a stop word: StopW@k • (*) If Ck (i−4 < k < i+4) is a noun-suffix: NSuffix@k • (*) If Ck (i −4 < k < i + 4) is a Pinyin Initial: Initial@k • (*) If Ck (i−4 < k < i+4) is a Pinyin Final: Final@k • If Ck (i −4 < k < i + 4) is a English letter: En@k • If Cm:n (i−4 < m < n < i+4, 0 < n−m < 5) matches one entry in the Peking University dictionary: FW@m:n; len@offset; FW-Ck@b-offset, FW-Ck@n-offset or FW-Ck@e-offset • (*) If Cm:n (i −4 < m < n < i + 4, 0 < n −m < 5) matches one entry in the informal word list: IFW@m:n; len@offset; IFW-Ck@b-offset, IFW-Ck@n-offset or IFW-Ck@e-offset • (*) If Cm:n (i −4 < m < n < i + 4, 0 < n −m < 5) matches one entry in the valid Pinyin list: PY@m:n; len@offset; PY-Ck@b-offset, PYCk@n-offset or PY-Ck@e-offset Statistical Features. We use pointwise mutual information (PMI) variant (Church and Hanks, 3Resources are available at http://www.sogou. com/labs/resources.html 4http://www.bckelk.uklinux.net/menu. html 1990) to account for global, corpus-wide information. This measures the difference between the observed probability of an event (i.e., several characters combined as an informal word) and its expectation, based on the probabilities of the individual events (i.e., the probability of the individual characters occurring in the corpus). Compared with other standard association measures such as MI, PMI tends to assign rare events higher scores. This makes it a useful signal for IWR, as it is sensitive to informal words which often have low frequency. However, the word frequency alone is not reliable enough to distinguish informal words from uncommon but formal words. In response to these difficulties in differentiating linguistic registers, we compute two different PMI scores for character-based bigrams from two large corpora representing news and microblogs as features. We also use the difference between the two PMI scores as a differential feature. In addition, we also convert all the character-based bigrams into Pinyin-based bigrams (ignoring tones5) and compute the Pinyin-level PMI in the same way. 
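A sketch of the bigram PMI computation is shown below. Corpus handling is reduced to toy strings standing in for the news and microblog collections, and the treatment of an unseen bigram (scored as 0.0 here) is our own assumption rather than the paper's choice; only the unsmoothed PMI formula and the per-domain difference feature follow the description above.

```python
import math
from collections import Counter

# Sketch of the character-bigram PMI features computed separately per domain
# (no smoothing, as in the text). Toy corpora and the 0.0 score for unseen
# bigrams are illustrative assumptions.

def pmi_table(texts):
    """Return a bigram -> PMI scorer estimated over the given texts."""
    uni, bi = Counter(), Counter()
    for text in texts:
        uni.update(text)                    # character unigram counts
        bi.update(zip(text, text[1:]))      # character bigram counts
    n_uni, n_bi = sum(uni.values()), sum(bi.values())

    def pmi(c1, c2):
        if bi[(c1, c2)] == 0:
            return 0.0                      # unseen in this corpus (assumption)
        p_xy = bi[(c1, c2)] / n_bi
        p_x, p_y = uni[c1] / n_uni, uni[c2] / n_uni
        return math.log(p_xy / (p_x * p_y))

    return pmi

# Toy corpora standing in for the news and microblog collections.
pmi_news = pmi_table(["the market rose today", "the market fell today"])
pmi_blog = pmi_table(["lol the market tho", "my rp is so bad today"])

def pmi_features(c1, c2):
    """CPMI-N, CPMI-M and their difference CDiff for one character bigram."""
    n, m = pmi_news(c1, c2), pmi_blog(c1, c2)
    return {"CPMI-N": n, "CPMI-M": m, "CDiff": n - m}

print(pmi_features("r", "p"))
```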
These features capture inconsistent use of the bigram across the two domains, which assists to distinguish informal words. Note that we eschew smoothing in our computation of PMI, as it is important to capture the inconsistent character bigrams usage between the two domains. For example, the word “rp” appears in the microblog domain, but not in news. If smoothing is conducted, the character bigram “rp” will be given a non-zero probability in both domains, not reflective of actual use. For each character Ci, we incorporate the PMI of the character bigrams as follows: • (*) If CkCk+1 (i −4 < k < i + 4) is not a Chinese word recorded in dictionaries: CPMI-N@k+i; CPMI-M@k+i; CDiff@k+i; PYPMI-N@k+i; PYPMI-M@k+i; PYDiff@k+i 3 Experiment We discuss the dataset, baseline systems and experiments results in detail in the following. 3.1 Data Preparation We utilize the Chinese social media archive, PrEV (Cui et al., 2012), to obtain Chinese mi5The informal word may have the same Pinyin transcription as its formal counterpart without considering the differences in tones. 734 croblog posts from the public timeline of Sina Weibo6. Sina Weibo is the largest microblogging in China, where over 100 million Chinese microblog posts are posted daily (Cao, 2012), likely the largest public source of informal and daily Chinese language use. Our dataset has a total of 6,678,021 messages, covering two months from June to July of 2011. To annotate the corpus, we employ Zhubajie7, one of China mainland’s largest crowdsourcing (Wang et al., 2010) platforms to obtain informal word annotations. In total, we spent US$110 on assembling a subset of 5, 500 posts (12, 446 sentences) in which 1, 658 unique informal words are annotated within five weeks via Zhubajie. Each post was annotated by three annotators with moderate (0.57) inter-annotator agreement measured by Fleiss’ κ (Joseph, 1971), and conflicts were resolved by majority voting. We divided the annotated corpus, taking 4, 000 posts for training, and the remainder (1, 500) for testing. Through inspection, we note that 79.8% of the informal words annotated in the testing set are not covered by the training set. We also follow Wang et al. (2012)’s conventions and apply rulesets to preprocess the corpus’ URLs, emoticons, “@usernames” and Hashtags as pre-segmented words, before input to CWS and IWR. For the CWS task, the first author manually labelled the same corpus following the segmentation guidelines published with the SIGHAN-58 MSR dataset. 3.2 Baseline Systems We implemented several baseline systems to compare with proposed FCRF joint inference method. Existing Systems. We re-implemented Xia and Wong (2008)’s extended Support Vector Machine (SVM) based microtext IWR system to compare with our method. Their system only does IWR, using the CWS and POS tagging otuput of the ICTCLAS segmenter (Zhang et al., 2003) as input. To compare our joint inference versus other learning models, we also employed a decision tree (DT) learner, equipped with the same feature set as our FCRF. Both the SVM and DT models are provided by the Weka3 (Hall et al., 2009) toolkit, using its default configuration. To evaluate CWS performance, we compare with two recent segmenters. Sun and Xu (2011)’s 6http://open.weibo.com 7http://www.zhubajie.com 8http://www.sighan.org work achieves state-of-the-art performance and is publicly available. They employ a LCRF taking as input both lexical and statistical features derived from unlabeled data. 
As a second baseline, we also evaluate against a widely-used, commercially-available alternative, the recently released 2011 ICTCLAS segmenter (http://www.ictclas.org/index.html).
Two-stage Sequential Systems. To benchmark the improvement that the factorial CRF model gains by performing the two tasks jointly, we compare with an LCRF solution that chains these two tasks together. For completeness, we test pipelining in both directions – CWS feeding features for IWR (LCRFcws≻LCRFiwr), and the reverse (LCRFiwr≻LCRFcws). We modify the open-source Mallet GRMM package (Sutton, 2006) to implement both this sequential LCRF model and our proposed FCRF model. Both models take the whole feature set described in Section 2.3.
Upper Bound Systems. To measure the upper bound achievable with perfect support from the complementary task, we also provided gold standard labels of one task (e.g., IWR) as an input feature to the other task (e.g., CWS). These systems (hereafter denoted as LCRF≻LCRF-UB and FCRF-UB) are meant for reference only, as they have access to answers for the opposing tasks.
Adapted SVM for Joint Classification. For completeness, we also compared our work against the standard SVM classification model that performs both tasks by predicting the cross-product of the CWS and IWR individual classes (in total, 8 classes). We train the SVM classifier on the same set of features as the FCRF, by providing the cross-product of the two layer labels as gold labels. This system (hereafter denoted as SVM-JC) was implemented using the LibSVM package (Chang and Lin, 2011).
3.3 Evaluation Metrics
We use the standard metrics of precision, recall and F1 for the IWR task. Only words that exactly match the manually-annotated labels are considered correct. For example, given the sentence “ HË Ë ËH H H}b” (“HÙ Ù ÙH H H}b”; “How delicious it is”), if the IWR component identifies “Ë H” as an informal word, it will be considered correct, whereas both “ËH}” and “Ë” are deemed incorrect. For CWS evaluation, we employ the conventional scoring script provided with SIGHAN-5, which also provides out-of-vocabulary recall (OOVR). To determine the statistical significance of the improvements, we also compute paired, one-tailed t tests. As pointed out by Yeh (2000), the randomization method is more reliable for measuring the significance of F1, as it handles non-linear functions of random variables. Thus we employ Padó (2006)'s implementation of the randomization algorithm to measure the significance of F1.
3.4 Experimental Results
The goal of our experiments is to answer the following research questions:
RQ1 Do the two tasks of CWS and IWR benefit from each other?
RQ2 Is jointly modeling both tasks more efficient than conducting each task separately or sequentially?
RQ3 What is the upper bound improvement that can be achieved with perfect support from the opposing task?
RQ4 Are the features we designed for the joint inference method effective?
RQ5 Is there a significant difference between the performance of the joint inference of a cross-product SVM and our proposed FCRF?
3.4.1 CWS Performance
Table 2: Performance comparison on the CWS task. The two bottom-most rows show upper bound performance. ‘‡’ (‘∗’) in the top four lines indicates statistical significance at p < 0.001 (0.05) when compared with the previous row. Symbols in the bottom two lines indicate significant difference between upper bound systems and their corresponding counterparts.
Pre Rec F1 OOVR
ICTCLAS (2003) 0.640 0.767 0.698 0.551
Sun and Xu (2011) 0.661‡ 0.691‡ 0.675 0.572‡
LCRFiwr≻LCRFcws 0.741‡ 0.775‡ 0.758∗ 0.607∗
FCRF 0.757‡ 0.801‡ 0.778∗ 0.633∗
LCRFiwr≻LCRFcws-UB 0.807‡ 0.815‡ 0.811∗ 0.731‡
FCRF-UB 0.820‡ 0.833‡ 0.826∗ 0.758‡
In general, our FCRF yields the best performance among all systems (top portion of Table 2), answering RQ1. Given microblog posts as test data, the F1 of ICTCLAS drops from 0.985 (its self-declared segmentation accuracy on formal text; http://www.ictclas.org/) to 0.698, clearly showing the difficulty of processing microtext. The sequential LCRF model and the FCRF model both outperform the baselines, which means that, with the novel features shared by the two tasks, CWS benefits significantly from the results of IWR. Hence our segmenter outperforms the existing segmenters by tackling one of their bottlenecks, the recognition of informal words in Chinese microtext. To illustrate, the sequence “... ( ( ( º...” (“... ¡ ¡ ¡ º...”; “...is there anyone...”) is correctly labeled as BIES by our FCRF model but mislabeled by the baseline systems as SSBE. This is likely because the baseline systems are unaware of the informal word “ ( ”, leading them to keep the formal word “ º” (“someone”) as a segment. More importantly, by jointly optimizing the probabilities of labels on both layers, the FCRF model slightly but significantly improves over the sequential LCRF method, answering RQ2. Thus we conclude that jointly modeling both tasks is more effective than performing the tasks sequentially. For RQ3, the last two rows present the upper-bound systems that have access to gold standard labels for IWR. Both upper-bound systems statistically outperform their counterparts, indicating that there is still room to improve CWS performance with better IWR as input. This also validates our assumption that CWS can benefit from joint consideration of IWR. Taking the best previous work as our lower bound (0.69 F1), we see that our FCRF methodology (0.77) makes significant progress towards the upper bound (0.82).
3.4.2 IWR Performance
For RQ1 and RQ2, Table 3 compares the performance of our method with the baseline systems on the IWR task. Overall, the FCRF method again outperforms all the baseline systems. We note that the CRF-based models achieve much higher precision scores than the baseline systems, which means that the CRF-based models can make accurate predictions without enlarging the scope of prospective informal words. Compared with the CRF-based models, the SVM and DT both over-predict informal words, incurring a larger precision penalty.
Table 3: Performance comparison on the IWR task. ‘‡’ or ‘∗’ in the top four rows indicates statistical significance at p < 0.001 or < 0.05 compared with the previous row. Symbols in the bottom two rows indicate differences between upper bound systems and their counterparts.
Pre Rec F1
SVM 0.382 0.621 0.473
DT 0.402∗ 0.714∗ 0.514∗
LCRFcws≻LCRFiwr 0.858‡ 0.591‡ 0.699∗
FCRF 0.877∗ 0.655∗ 0.750∗
LCRFcws≻LCRFiwr-UB 0.840 0.726∗ 0.779∗
FCRF-UB 0.878 0.752∗ 0.810∗
Studying this phenomenon more closely, we find it is difficult for the baseline systems to classify segments that mix formal and informal characters. Taking the microblog “HË Ë ËH H H}b” (“HÙ Ù ÙH H H} b”; “how delicious it is”) as an example, without considering the possible word boundaries suggested by the contextual formal words – i.e., “ H” (“how”) and “}” (“delicious”) – the baselines chunk the informal words (i.e., “ËH”) together with adjacent characters mistakenly as “ Ë H}” or “HËH”.
As indicated by the bold figures in Table 3, the FCRF performs slightly better than the sequential LCRF (p < 0.05) – a weaker trend when compared with the CWS case. As an example, the sequential LCRF method fails to recognize “1¯” (“iPhone”) as an informal word in the sentence “„1 1 1¯ ¯ ¯}©” (“my iPhone is fun”), where the FCRF succeeds. Inspecting the output, the LCRF segmenter mislabels “1¯” as SS. By jointly considering the probabilities of the two layers, the FCRF model infers better-quality segmentation labels, which in turn enhances the FCRF's capability to recognize the sequence of two characters as an informal word. This is further validated by the significant performance gulf between the upper bound and the basic system shown in the lower half of the table. For RQ3, interestingly, the difference in performance between the LCRF and FCRF upper-bound systems is not significant. However, these are upper bounds, and we expect that on real-world data CWS performance will not be perfect. As such, we still recommend using the FCRF model, as the joint process is more robust to noisy input from one channel.
Table 4: F1 comparison between FCRF and FCRF−new. (‘∗’) indicates statistical significance at p < 0.05 when compared with the previous row.
CWS IWR
FCRF−new 0.690 0.552
FCRF 0.778∗ 0.750∗
3.4.3 Feature set evaluation
For RQ4, to evaluate the effectiveness of our newly-introduced feature sets (those marked with “*” in Section 2.3), we also test an FCRF (FCRF−new) without our new features. According to Table 4, performance drops by a significant amount: 0.088 F1 on CWS and 0.198 F1 on IWR. FCRF−new makes many mistakes identical to the baselines: segmenting informal words into several single-character words and chunking adjacent characters from informal and formal words together.
3.4.4 Adapted SVM-JC vs. FCRF
Table 5: F1 comparison between SVM, SVM-JC and FCRF. ‘‡’ (‘∗’) indicates statistical significance at p < 0.001 (0.05) when compared with the previous row.
CWS IWR
SVM — 0.473
SVM-JC 0.741 0.624‡
FCRF 0.778∗ 0.750∗
For RQ5, according to Table 5, our SVM trained to predict the cross-product CWS/IWR classification (SVM-JC) performs quite well on its own. Unsurprisingly, it does not outperform our proposed FCRF, which has access to more structural correlation among the CWS and IWR labels. SVM-JC significantly (p < 0.001) outperforms the baseline SVM system by 0.151 in the IWR task, which we think is partially explained by its good performance (0.761) on the CWS task. The over-prediction tendency of the individual SVM is largely solved by simultaneously modeling the CWS task, whereas the FCRF turns out to be more effective at solving the joint inference problem, although with a weaker trend in terms of statistical significance (p < 0.05). We conclude that the use of the FCRF model and the addition of our new features are both essential for the high performance of our system.
4 Discussion
We wish to understand the causes of errors in our models so that we may better understand their weaknesses. Manually inspecting the errors of our system, we found three major categories of errors, which we dissect here. For IWR, the major source of error, accounting for more than 60% of all errors, is caused by what we term the partially observed informal word phenomenon. This refers to informal words containing multiple characters, where some of their components have appeared in the training data as informal words individually.
For instance, the single-character informal word “à” (“ˆ”; “very”) appears in training multiple times, thus the unseen informal word “àE” (“ˆE”; “long time”) is a partially observed informal word. In this case, the model incorrectly labels the known single character “à” with IF S as an informal word, instead of labeling the unseen sequence “àE” with the correct labels IF B IF E. Errors then result in both tasks. This observation motivates using the relation between a known informal word and its formal counterpart to help the model make better predictions in cases of partial observation. Following the same example, given that “à” is an informal word, if the model also considers the probability of normalizing “à” to “ˆ”, while considering the higher probability that the character sequence “ˆE” could be a formal word, there would be a higher likelihood of correctly predicting the sequence “àE” as an informal word. So informal word normalization is also an intrinsic component of IWR and CWS, and we believe it is an interesting direction for future work.
Another source of error is a side effect of microtext being extremely short. For example, in the sentence “¥ ¥ ¥¶ ¶ ¶*/†” (“Þ Þ Þ¶ ¶ ¶* /†”; “Go home! Exhausted.”), the unseen informal word “¥¶” itself forms a short sentence. Although it has a subsequent sentence “*/†” (“Exhausted”) as context, and the two are pragmatically related (i.e., “I am exhausted! [And as a result,] I want to go home.”), the lexical relationship between the sentences is weak; i.e., “*/†” appears frequently as the context of various sentences, making the context difficult to utilize. These phenomena make it difficult to recognize “¥¶” as an informal word. A possible solution could factor in proximity, similar to density-based matching, as in Tellex et al. (2003). We can assign a higher weight to features related to characters closer to the current target character. In particular, for this example, given the current target character “ ¥”, we can assign higher weight to features generated from the proximal context “¥¶”, and lower weight to features extracted from distal contexts.
Table 6: Sample Chinese freestyle named entities that are usernames.
“´²êš”: “´²” (“durian”), “ê” (“snow”), “š” (“charming lady”)
“ É•”: short for the cartoon name “w õ••”
“dj‡e”, “•pp”: usernames mixing Chinese and alphabetic characters
Another major group of errors comes from what we term freestyle named entities, as exemplified in Table 6; i.e., person names in the form of user IDs and nicknames, which have fewer constraints on form in terms of length and canonical structure (they need not be surnames followed by given names, as is standard in Chinese names) and may mix in alphabetic characters. Most of these belong to the category of Person Name (PER), as defined in the CoNLL-2003 Named Entity Recognition shared task (http://www.cnts.ua.ac.be/conll2003/ner/). Such freestyle entities are often misrecognized as informal words, as they share some of the same stylistic markings and are not captured by the features used in previous Chinese named entity recognition methods (Gao et al., 2005; Zhao and Kit, 2008), which work on news or general-domain text. We recognize this as a challenge in Chinese microtext, but one beyond the scope of our current work.
5 Related Work
In English, IWR has typically been investigated alongside normalization. Several recent works (Han and Baldwin, 2011; Gouws et al., 2011; Han et al., 2012) aim to produce informal/formal word lexicons and mappings.
These works are based on distributional similarity and string similarity, which address lexical variation and spelling. These methods propose two-step unsupervised approaches that first detect informal words and then normalize the detected words using dictionaries. In processing Chinese informal language, work conducted by Xia and Wong addresses the problem in bulletin board system (BBS) chats. They employ pattern matching and SVM-based classification to recognize Chinese informal sentences (not individual words) in chat (Xia et al., 2005). Both methods had their advantages: the learning-based method did better on recall, while the pattern matching performed better on precision. To obtain consistent performance on new unseen data, they further employed an error-driven method which performed more consistently over time-varying data (Xia and Wong, 2006). In contrast, our work identifies individual informal words, a finer-grained (and more difficult) task. While seminal, we feel that the difference in scope (informal sentence detection rather than word detection) shows the limitation of their work for microblog IWR. Their chats cover only 651 unique informal words, as opposed to our study covering almost triple the word types (1,658). Our corpus demonstrates a higher ratio of informal word use (a new informal word appears in 1,658/12,446 = 13% of sentences, as opposed to 651/22,400 = 2% in their BBS corpus). Further analysis of their corpus reveals that phonetic substitution is the primary origin of informal words in their corpus – 99.2% as reported by Wong and Xia (2008). In contrast, the origins of informal words in microblogs are more varied: phonetic substitutions, abbreviations and neologisms account for 53.1%, 21.4% and 18.7% of the informal word types, respectively. Their method is best suited for phonetic substitution, thus performing well on their corpus but poorly on ours. More closely related, Li and Yarowsky (2008) tackle Chinese IWR. They bootstrap 500 informal/formal word pairs by using manually-tuned queries to find definition sentences on the Web. The resulting noisy list is further re-ranked based on n-gram co-occurrence. However, their method rests on a basic assumption, namely that informal/formal word pairs co-occur within a definition sentence (i.e., “<informal word> means <formal word>”), which may not hold in microblog data, as microbloggers largely do not define the words they use.
Closely related to our work is the task of Chinese new word detection, normally treated as a separate process from word segmentation in most previous works (Chen and Bai, 1998; Wu and Jiang, 2000; Chen and Ma, 2002; Gao et al., 2005). Aiming to improve both tasks, work by Peng et al. (2004) and Sun et al. (2012) conducts segmentation and detection sequentially, but in an iterative manner rather than jointly. This is a weakness, as their linear CRF models require re-training. Their methods also require thresholds, set through heuristic tuning, to decide whether the segmented words are indeed new words. We note that the task of new word detection refers to out-of-vocabulary (OOV) detection, and is distinctly different from IWR (new words can be either formal or informal).
6 Conclusion
There is a close dependency between Chinese word segmentation (CWS) and informal word recognition (IWR). To leverage this, we employ a factorial conditional random field to perform both tasks of CWS and IWR jointly.
We propose novel features including statistical and lexical features that improve the performance of the inference process. We evaluate our method on a manually-constructed data set and compare it with multiple research and industrial baselines that perform CWS and IWR individually or sequentially. Our experimental results show our joint inference model yields significantly better F1 for both tasks. For analysis, we also construct upper bound systems to assess the potential maximal improvement, by feeding one task with the gold standard labels from the complementary task. These experiments further verify the necessity and effectiveness of modeling the two tasks jointly, and point to the possibility of even better performance with improved per-task performance. Analyzing the classes of errors made by our system, we identify a promising future work topic to handle errors arising from partially observed informal words – where parts of a multi-character informal word have been observed before. We believe incorporating informal word normalization into the inference process may help address this important source of error. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. We also appreciate the proofreading effort made by Tao Chen, Xiangnan He, Ning Fang, Yushi Wang and Haochen Zhan from WING. This work also benefits from the discussion with Yang Liu, associate professor from Tsinghua University. 739 References Belinda Cao. 2012. Sina’s weibo outlook buoys internet stock gains: China overnight. Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, pages 27:1–27:27. Keh-Jiann Chen and Ming-Hong Bai. 1998. Unknown Word Detection for Chinese by a CorpusBased Learning Method. International Journal of Computational Linguistics and Chinese Language Processing, pages 27–44. Keh-Jiann Chen and Wei-Yun Ma. 2002. Unknown Word Extraction for Chinese Documents. In Proceedings of the 19th international conference on Computational linguistics, pages 1–7. Kenneth Ward Church and Patrick Hanks. 1990. Word Association Norms, Mutual Information, and Lexicography. Computional Linguistic, pages 22–29. Anqi Cui, Liner Yang, Dejun Hou, Min-Yen Kan, Yiqun Liu, Min Zhang, and Shaoping Ma. 2012. PrEV: Preservation Explorer and Vault for Web 2.0 User-Generated Content. Theory and Practice of Digital Libraries, pages 101–112. Jianfeng Gao, Mu Li, Andi Wu, and Chang-Ning Huang. 2005. Chinese Word Segmentation and Named Entity Recognition: A Pragmatic Approach. Computitional Linguistic, pages 531–574. Stephan Gouws, Donald Metzler, Congxing Cai, and Eduard Hovy. 2011. Contextual Bearing on Linguistic Variation in Social Media. In Proceedings of the Workshop on Language in Social Media, pages 20–29. Zhao Hai, Huang Chang-Ning, Li Mu, and Lu BaoLiang. 2006. Effective Tag Set Selection in Chinese Word Segmentation via Conditional Random Field Modeling. The 20th Pacific Asia Conference on Language, Information and Computation, pages pp.87–94. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA Data Mining Software: An Update. ACM Special Interest Groups on Knowledge Discovery and Data Mining Explorations Newsletter, pages 10–18. Bo Han and Timothy Baldwin. 2011. Lexical Normalisation of Short Text Messages: Makn Sens a #twitter. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 368–378. Bo Han, Paul Cook, and Timothy Baldwin. 2012. Automatically Constructing a Normalisation Dictionary for Microblogs. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 421–432. Finn V Jensen. 1996. An Introduction to Bayesian Networks, volume 74. Fleiss L Joseph. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, pages 378–382. Catherine Kobus, Franc¸ois Yvon, and G´eraldine Damnati. 2008. Normalizing SMS: Are Two Metaphors Better Than One? In International Conference on Computational Linguistics, pages 441– 448. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282–289. Zhifei Li and David Yarowsky. 2008. Mining and Modeling Relations between Formal and Informal Chinese Phrases from Web Corpora. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1031–1040. Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A Maximum Entropy Approach to Chinese Word Segmentation. Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing. Sebastian Pad´o, 2006. User’s Guide to SIGF: Significance Testing by Approximate Randomisation. Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese Segmentation and New Word Detection Using Conditional Random Fields. In Proceedings of the 20th international conference on Computational Linguistics, page 562. Weiwei Sun and Jia Xu. 2011. Enhancing Chinese Word Segmentation Using Unlabeled Data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 970–979. Xu Sun, Houfeng Wang, and Wenjie Li. 2012. Fast Online Training with Frequency-Adaptive Learning Rates for Chinese Word Segmentation and New Word Detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 253–262. Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. 2007. Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data. Journal of Machine Learning Research, pages 693–723. Charles Sutton. 2006. GRMM: GRaphical Models in Mallet. In URL http://mallet. cs. umass. edu/grmm. 740 Stephanie Tellex, Boris Katz, Jimmy Lin, Aaron Fernandes, and Gregory Marton. 2003. Quantitative evaluation of passage retrieval algorithms for question answering. In Proceedings of the 26th annual international ACM SIGIR conference on Research and Development in Information Retrieval, pages 41–47. Aobo. Wang, Cong Duy Vu Hoang, and Min-Yen Kan. 2010. Perspectives on Crowdsourcing Annotations for Natural Language Processing, journal = Language Resources and Evaluation. pages 1–23. Aobo Wang, Tao Chen, and Min-Yen Kan. 2012. Retweeting From A Linguistic Perspective. In Proceedings of the Second Workshop on Language in Social Media, pages 46–55. Kam-Fai Wong and Yunqing Xia. 2008. Normalization of Chinese Chat Language. Language Resources and Evaluation, pages 219–242. Andi Wu and Zixin Jiang. 2000. StatisticallyEnhanced New Word Identification in A Rule-based Chinese Aystem. In Proceedings of the second workshop on Chinese Language Processing, pages 46–51. Yunqing Xia and Kam-Fai Wong. 
2006. Anomaly Detecting within Dynamic Chinese Chat Text. NEW TEXT Wikis and blogs and other dynamic text sources, page 48. Yunqing Xia, Kam-Fai Wong, and Wei Gao. 2005. NIL Is Not Nothing: Recognition of Chinese Network Informal Language Expressions. In 4th SIGHAN Workshop on Chinese Language Processing, volume 5. Nianwen Xue. 2003. Chinese Word Segmentation as Character Tagging. Computational Linguistics and Chinese Language Processing, pages 29–48. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th conference on Computational linguistics - Volume 2, pages 947–953. Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. HHMM-based Chinese Lexical Analyzer ICTCLAS. In Proceedings of the second SIGHAN workshop on Chinese language processing, pages 184–187. Hai Zhao and Chunyu Kit. 2008. Unsupervised Segmentation Helps Supervised Learning of Character Tagging for Word Segmentation and Named Entity Recognition. In Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing, pages 106–111. 741
2013
72
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 742–751, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Generating Synthetic Comparable Questions for News Articles Oleg Rokhlenko Yahoo! Research Haifa 31905, Israel [email protected] Idan Szpektor Yahoo! Research Haifa 31905, Israel [email protected] Abstract We introduce the novel task of automatically generating questions that are relevant to a text but do not appear in it. One motivating example of its application is for increasing user engagement around news articles by suggesting relevant comparable questions, such as “is Beyonce a better singer than Madonna?”, for the user to answer. We present the first algorithm for the task, which consists of: (a) offline construction of a comparable question template database; (b) ranking of relevant templates to a given article; and (c) instantiation of templates only with entities in the article whose comparison under the template’s relation makes sense. We tested the suggestions generated by our algorithm via a Mechanical Turk experiment, which showed a significant improvement over the strongest baseline of more than 45% in all metrics. 1 Introduction For companies whose revenues are mainly adbased, e.g. Facebook, Google and Yahoo, increasing user engagement is an important goal, leading to more time spent on site and consequently to increased exposure to ads. Examples for typical engaging content include other articles for the user to read, updates from the user’s social neighborhood and votes or comments on videos, blogs etc. In this paper we propose a new way to increase user engagement around news articles, namely suggesting questions for the user to answer, which are related to the viewed article. Our motivation is that there are questions that are “irresistible” because they are fun, involve emotional reaction and expect simple answers. These are comparative questions, such as “is Beyonce a better singer than Madonna?”, “who is better looking, Brad Pitt or George Clooney?”, “who is faster: Superman or Flash?” and “which camera brand do you prefer: Canon or Nikon?” Furthermore, such questions are social in nature since users would be interested in reading the opinions of other users, similar to viewing other comments (Schuth et al., 2007). Hence, a user that provided an answer may return to view other answers, further increasing her engagement with the site. One approach for generating comparable questions would be to employ traditional question generation, which syntactically transform assertions in a given text into questions (Mitkov et al., 2006; Heilman and Smith, 2010; Rus et al., 2010). Sadly, fun and engaging comparative questions are typically not found within the text of news articles. A different approach would be to find concrete relevant questions within external collections of manually generated comparable questions. Such collections include Community-based Question Answering (CQA) sites such as Yahoo! Answers and Baidu Zhidao and sites that are specialized in polls, such as Toluna. However, it is highly unlikely that such sources will contain enough relevant questions for any news article due to typical sparseness issues as well as differences in interests between askers in CQA sites and news reporters. 
To better address the motivating application above, we propose the novel task of automatically suggesting comparative questions that are relevant to a given input news article but do not appear in it. To achieve broad coverage for our task, we present an algorithm that generates synthetic concrete questions from question templates, such as “Who is a better actor: #1 or #2?”. Our algorithm consists of two parts. An offline part constructs a database of comparative question templates that appear in a large question corpus. For a given news article, an online part chooses relevant templates for the article by matching between the article content and typical template contexts.
Figure 1: An example news article from OMG!
The algorithm then instantiates each relevant template with two entities that appear in the article. Yet, for a given template, only some of the entities are plausible slot fillers. For example, ‘Madonna’ is not a reasonable filler for “Who is a better dad, #1 or #2?”. Thus, our algorithm employs entity filtering to exclude candidate instantiations that do not make sense. To test the performance of our algorithm, we conducted a Mechanical Turk experiment that assessed the quality of suggested questions for news articles on celebrities. We compared our algorithm to a random baseline and to a partial version of our algorithm that includes a template relevance component but lacks filtering of candidate instantiations. The results show that the full algorithm provided 45% more correct instantiations, but surprisingly also 46% more relevant suggestions, compared to the stronger baseline. These results point to the importance of both relevant template selection and smart instantiation selection for the quality of the generated questions. In addition, they indicate that user perception of relevance is affected by the correctness of the question.
2 Motivation and Algorithmic Overview
Before we detail our algorithm, we provide some motivation for and insight into the design choices we took, which also indicate the difficulties inherent in the task.
2.1 Motivation
Given a news article, our algorithm generates a set of comparable questions for the article from question templates, e.g. “who is faster #1 or #2?”. Though the template words typically do not appear in the article, they need to be relevant to its content, that is, they should correspond to one of the main themes in the article or to one of the public interests of the compared entities.
Figure 2: A high-level overview of the comparable question generation algorithm. The offline part is colored dark grey and the online part is colored light blue.
For example, “who is a better dad #1 or #2?” is relevant to the article in Figure 1, while “who is faster #1 or #2?” is not relevant. Therefore, we need to model the typical contents to which each template is relevant. Looking at the structure of comparable questions, we observed that a specific comparable relation, such as ‘better dad’ and ‘faster’, can usually be combined with named entities in several syntactic ways to construct a concrete question. We encode this information in generic comparable templates, e.g. “who is a RE: #1 or #2?” and “is #1 a RE than #2?”, where RE is a slot for a comparable relation and #1 and #2 are slots for entities.
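As an illustration of how a generic template, a comparable relation and two entity slots combine into a concrete question, here is a minimal sketch using the two templates quoted above. It is illustrative only; the naive string replacement is not presented as the authors' implementation.

```python
# Hypothetical instances of the two resources described above: generic
# comparable templates with a relation slot (RE) and two entity slots.
GENERIC_TEMPLATES = [
    "who is a RE: #1 or #2?",
    "is #1 a RE than #2?",
]

def instantiate(template, relation, entity1, entity2):
    """Fill a generic comparable template with a relation and two entities.
    (Plain substring replacement is good enough for a sketch, but would be
    fragile if 'RE', '#1' or '#2' occurred inside the filled values.)"""
    return (template.replace("RE", relation)
                    .replace("#1", entity1)
                    .replace("#2", entity2))

# instantiate(GENERIC_TEMPLATES[1], "better fighter", "Jackie Chan", "Jet Li")
# -> "is Jackie Chan a better fighter than Jet Li?"
```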
Using the above generic templates, ‘Jet Li’ and ‘Jackie Chan’ can be combined with the comparable relation ‘better fighter’ to generate “who is a better fighter: Jackie Chan or Jet Li?” and “is Jackie Chan a better fighter than Jet Li?” respectively. Following, our algorithm separately maintains comparable relations and generic templates. In this paper we constrain ourselves to generate comparable questions between entities that appear in the article. Yet, not all entities can be compared to each other under a specific template, adding substantial complexity to the generation of questions. Looking at Figure 1, the generated question ‘who is faster, Angelina Jolie or David Beckham?’ makes sense with respect to David Beckham, but not with respect to Angelina Jolie, since the typical reader is rarely interested in her running skills. Our algorithm thus needs to assess whether an instantiation is correct, that is whether the comparison between the two entities makes sense under the specific template. 743 Further delving into question correctness, the above example shows the need to assess each entity by itself. However, even if both entities are independently valid for the template, their comparison may not make sense. For example, “who is better looking: Will Smith or Angelina Jolie?” doesn’t feel right, even though each entity by itself fits the template. This is because when comparing looks, we expect a same sex comparison. 2.2 Algorithmic Overview The above observations led us to the design of the automatic generation algorithm depicted in Figure 2. The algorithm’s offline part constructs, from a large collection of questions, a database of comparable relations, together with their typical contexts. It also extracts generic templates and the mapping to the relations that may instantiate them. From this database, we learn: (a) a context profile per template for relevance matching; (b) a single entity model per template slot that identify valid instantiations; and (c) an entity pair model that detects pairs of entities that can be compared together under the template. In the online part, these three models are applied to rank relevant templates for a given article and to generate only correct questions with respect to template instantiation. The next two sections detail the template extraction component and the model training and application component in our algorithm. 3 Comparable Question Mining To suggest comparable questions our algorithm needs a database of question templates. As discussed previously, a good source for mining such templates are CQA sites. Specifically, in this study we utilize all questions submitted to Yahoo! Answers in 2011 as our corpus. We next describe how comparable relations and generic comparable templates are extracted from this corpus. 3.1 Comparable Relation Extraction An important observation for the task of comparable relation extraction is that many relations are complex multiword expressions, and thus their automatic detection is not trivial. Examples for such relations are marked in the questions “Who is the best rapper alive, Eminem or Jay-z?” and “Who is the most beautiful woman in the world, Adriana Lima or Jessica Alba?”. Therefore, we decided to employ a Conditional Random Fields (CRF) tagger (Lafferty et al., 2001) to the task, since CRF was shown to be state-of-the-art for sequential relation extraction (Mooney and Bunescu, 2005; Culotta et al., 2006; Jindal and Liu, 2006). 
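To make the tagging setup concrete, the following is a minimal, hypothetical sketch of a linear-chain CRF trained to mark relation words in questions whose entities have already been replaced by #1/#2 slots (the preprocessing described next). It assumes the third-party sklearn-crfsuite package, which is not necessarily what the authors used, and the feature functions are a small illustrative subset of the local features listed later; the toy gold labels are likewise invented for the example.

```python
import sklearn_crfsuite  # any linear-chain CRF implementation would do

CONNECTIVES = {"between", "out", ":", ",", "?"}

def token_features(tokens, i):
    """Per-word features: identity, capitalization, suffixes, position,
    and distances to the #i slots and to connective tokens."""
    w = tokens[i]
    slot_dists = [abs(i - j) for j, t in enumerate(tokens) if t in ("#1", "#2")]
    conn_dists = [abs(i - j) for j, t in enumerate(tokens) if t.lower() in CONNECTIVES]
    return {
        "word": w.lower(),
        "capitalized": w[:1].isupper(),
        "suffix2": w[-2:],
        "suffix3": w[-3:],
        "position": i,
        "dist_to_slot": min(slot_dists) if slot_dists else len(tokens),
        "dist_to_connective": min(conn_dists) if conn_dists else len(tokens),
    }

def sentence_features(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# Toy training pair: relation words are tagged REL, everything else O.
tokens = "#1 vs. #2 ? Who is the better cornerback ?".split()
labels = ["O", "O", "O", "O", "O", "O", "REL", "REL", "REL", "O"]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit([sentence_features(tokens)], [labels])
print(crf.predict([sentence_features(tokens)]))
```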
As a pre-processing step for detecting comparable relations, our extraction algorithm identifies all the named entities of interest in our corpus, keeping only questions that contain at least two entities. In each of remaining questions, we then substitute the entity names with the variable slots #i in the order of their appearance. For example, “Nnamdi Asomugha vs. Darrelle Revis? Who is the better cornerback?” turned into “#1 vs. #2? Who is the better cornerback?”. This transformation helps us to design a simpler CRF than that of (Jindal and Liu, 2006), since our CRF utilizes the known positions of the target entities in the text. To train the CRF model, the authors manually tagged all comparable relation words in approximately 300 transformed questions in the filtered corpus. The local and global features for the CRF, which we induce from each question word, are specified in Figures 3 and 4 respectively. Though there are many questions in Yahoo! Answers containing two named entities, e.g. “Is #1 dating #2?”, our CRF tagger is trained to detect only comparable relations like “Who is prettier #1 or #2?”. This is due to the labeled training set, which contains only this kind of relations, and to our features, which capture aspects of this specific linguistic structure. The trained model was then applied to all other questions in the filtered corpus. This tagging process resulted in 60,000 identified question relation occurrences. From this output we constructed a database consisting of all occurring relations; each relation is accompanied by its supporting questions, those questions in which the relation occurrences were found. To achieve a highly accurate database, we filtered out relations with less than 50 supporting questions, ending with 295 relations in our database1. The authors conducted a manual evaluation of the CRF tagger performance, which showed 80% precision per occurrence. Yet, our filtering above of relations with low support left us with virtually 100% precision per relation and per occurrence. 1We intend to make this database publicly available under Yahoo! WebscopeTM (http://webscope. sandbox.yahoo.com). 2http://nlp.stanford.edu/software/ 744 (a) The word itself (b) Whether the word is capitalized (c) The word’s suffixes of length 1,2, and 3, which helps in detecting comparative adjectives that ends ‘est’ or ‘er’ (d) The word’s position in the sentence (e) The word’s Part of speech (POS) tag, based on the Stanford POS tagger2 (f) The words in a window of ±3 around the current one (g) The adjective before the word, if exists, which helps detecting comparative noun phrases, e.g. ‘better driver’ and ‘best singer’ (h) The shortest word distance between the word and one of the #i variables. (i) The shortest word distance of the word to one of the following connectives: ‘between’, ‘out’, ‘:’, ‘,’, ‘?’ Figure 3: CRF local features for each word (a) WH question type of the question, e.g. what, which, who, where (b) The average word distance between all #i variables in the question (c) The conjunction tokens appearing between the #i variables, such as or, vs, and Figure 4: CRF global features for each word 3.2 Comparable Template Extraction Our second mining task is to extract generic comparable templates that appear in our corpus, as well as identifying which comparable relation can instantiate which generic template. To this end, we replace each recognized relation sequence with a variable RE in the support questions annotated with #i variables. 
For example, “who is the best rapper alive, #1 or #2?” is transformed to “who is RE, #1 or #2?”. We next count the occurrences of each templatized question. While some questions contain many details besides the comparable generic template, others are simpler and contain only the generic template. Through this counting, frequently occurring generic templates are revealed, such as “is #1 a RE than #2?”. We retain only generic templates which appeared more than 50 times. Finally, for each comparable relation we mark as applicable only generic templates that occur at least once in the supporting questions of this relation. For example, the template “who is RE: #1 or #2?” was found applicable for ‘funnier’, and thus could be used to generate the concrete question “who is funnier: Jennifer Aniston or Courteney Cox?”. On average, each relation was associated with 3 generic templates.
Algorithm 1: A high-level overview of the online part of the question generation algorithm
Input: A news article
Output: A sorted list of comparable questions
1: Identify all target named entities (NEs) in the article
2: Infer the distribution of LDA topics for the article
3: For each comparable relation R in the database, compute its relevance score to be the similarity between the topic distributions of R and the article
4: Rank all the relations according to their relevance score and pick the top M as relevant
5: for each relevant relation R in the order of relevance ranking do
6: Filter out all the target NEs that do not pass the single entity classifier for R
7: Generate all possible NE pairs from those that passed the single classifier
8: Filter out all the generated NE pairs that do not pass the entity pair classifier for R
9: Pick the top N pairs with positive classification score to be qualified for generation
10: Instantiate R with each chosen NE pair via a randomly selected generic template
11: end for
4 Online Question Generation
The online part of our automatic generation algorithm takes as input a news article and generates concrete comparable questions for it. Its high-level description is presented in Algorithm 1. The algorithm starts by identifying the comparable relations in our database that are relevant to the article. For each relevant relation, we then generate concrete questions by picking generic templates that are applicable for this relation and instantiating them with pairs of named entities appearing in the article. Yet, as discussed before, only for some entity pairs does the comparison under the specific relation make sense, a quality which we refer to as instantiation correctness (see Section 2). To this end, we utilize two supervised models to filter incorrect instantiations. We next detail the two aspects of the online part: ranking relevant relations and correctly instantiating relations.
4.1 Ranking relevant relations
To assess how relevant a given comparable relation is to an article, we model the relation's typical context as a distribution over latent semantic topics. Specifically, we utilize Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to infer latent topics in texts. To train an LDA model, we constructed for each comparable relation a pseudo-document consisting of all questions that contain this relation in our corpus (the supporting questions). We then trained a model of 200 topics over these pseudo-documents, resulting in a model over a lexicon of 107,835 words.
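A compact sketch of this training step, and of the relevance scoring described in the following paragraphs, might look as follows, assuming the gensim library (an assumption, not the authors' stated toolkit); relation_pseudo_docs is a hypothetical, pre-built mapping from each comparable relation to the pooled tokens of its supporting questions.

```python
from gensim import corpora, models
from gensim.matutils import cossim

def build_relevance_ranker(relation_pseudo_docs, num_topics=200):
    """relation_pseudo_docs: dict mapping a comparable relation to the token
    list pooled from all of its supporting questions (an assumed input)."""
    dictionary = corpora.Dictionary(relation_pseudo_docs.values())
    bows = {r: dictionary.doc2bow(t) for r, t in relation_pseudo_docs.items()}
    lda = models.LdaModel(list(bows.values()), id2word=dictionary,
                          num_topics=num_topics)
    # Context profile of each relation: the topic distribution of its
    # pseudo-document.
    profiles = {r: lda.get_document_topics(b, minimum_probability=0.0)
                for r, b in bows.items()}

    def rank(article_tokens, top_m=3):
        """Score every relation by the cosine similarity between the article's
        inferred topic distribution and the relation's context profile."""
        art = lda.get_document_topics(dictionary.doc2bow(article_tokens),
                                      minimum_probability=0.0)
        scored = sorted(((cossim(art, p), r) for r, p in profiles.items()),
                        reverse=True)
        return [r for _, r in scored[:top_m]]

    return rank
```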
An additional product of the LDA training process is a topic distribution for each relation’s pseudo-document, which we consider as the relation’s context profile. We note that, unless otherwise specified, different model parameters were chosen based on a small held out collection of articles and questions, manually annotated by the authors. This collection was used to validate that the chosen parameter values indeed “make sense” for the task. Given a news article, a distribution over LDA topics is inferred from the article’s text using the trained model. Then, a cosine similarity between this distribution and the context profile of each comparable relation in our database is computed and taken as the relevance score for this relation. Finally, we rank all relations according to their relevance score and pick the top M as candidates for instantiation (M=3 in our experiment). 4.2 Correctly instantiating relations To generate useful questions from relevant comparable relations, we need to retain only correct instantiations of these relations. To this end, we utilize two complementing types of filters, one for each entity by itself, and one for pairs, since each filter considers different attributes of the entities at hand. For example, for the relation ‘is faster’, the single entity filter looks for athletes of all kinds, for whom this comparison is of interest to the reader. The pair filter, on the other hand, attempts to pass only same sex and same profession comparisons, e.g. male football players or female baseball players for this relation. We next describe the various features we extract for every entity and the supervised models that given this feature vector representation assess the correctness of an instantiation. 4.2.1 Entity Features We want to represent each entity as a vector of features that capture different aspects of entity characterization. To this end, we utilize two different broad-scale sources of information about named entities. The first is DBPedia3, which contains structured information on entries in Wikipedia, many of them are named entities that appear in news articles. The second source is the corpus of 3http://wiki.dbpedia.org/About CQA questions, which in our study was harvested from Yahoo! Answers (see Section 3). For named entities with a DBPedia entry, we extract all the DBPedia properties of classes subject and type as indicator features. Some example features for Brad Pitt include Actors from Oklahoma, AmericanAtheists, Artist and American film producers. One property that is currently missing from DBPedia is gender, a feature that was found to be very useful in our experiments. We automatically induce this feature from the Wikipedia abstract in each DBPedia entry. Specifically, we construct a histogram of male and female pronouns: he and his vs. she and her. The majority pronoun sex is then chosen to be the gender of the named entity, or none if the histogram is empty. One way to utilize the CQA question corpus could be to extract co-occurring words with each target entity as relevant contexts. Yet, since our questions come from Yahoo! Answers, we decided to use another attribute of the questions, the category to which the question is assigned, within a hierarchy of 1,669 categories (e.g. ‘Sports>Baseball’ and ‘Pets>Dogs’). For each named entity, we construct a histogram of the number of questions containing it that are assigned to each category. 
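The two entity features just described, the pronoun-based gender heuristic and the per-entity histogram of question categories, could be sketched as follows before the normalization discussed in the next paragraph; `questions` is an assumed list of (text, category) pairs from the CQA corpus, and the function names are illustrative only.

```python
import re
from collections import Counter

def gender_from_abstract(abstract):
    """Majority-pronoun heuristic over a Wikipedia abstract:
    'he'/'his' versus 'she'/'her'; returns None when neither occurs."""
    toks = Counter(re.findall(r"[a-z]+", abstract.lower()))
    male, female = toks["he"] + toks["his"], toks["she"] + toks["her"]
    if male == female == 0:
        return None
    return "male" if male >= female else "female"

def category_histogram(entity, questions):
    """Count, per Yahoo! Answers category, the questions mentioning `entity`.
    `questions` is an assumed iterable of (question_text, category) pairs."""
    return Counter(cat for text, cat in questions if entity in text)
```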
This histogram is normalized into a probability distribution with Laplace smoothing of 0.03, to incorporate the uncertainty that lies in named entities that appear only very few times. The categories and their probabilities are added as features, providing a high level representation of relevant contexts for the entity. 4.2.2 Single entity filtering We view the task of single entity filtering as a classification task. To this end, we trained a classifier per relation, constructing a different labeled training set for each relation. Positive examples are the entities that instantiate this relation in our CQA corpus. As negative examples, we take named entities that were never seen instantiating the relation in the corpus, but still occurred in some questions. We note that our named entity tagger could recognize more than 200,000 named entities, and most of them are negative for a given relation. For each relation we select negative examples by sampling uniformly from its negative entity list, assuming that the probability of hitting false negatives is low for such a long list. It is known that better classification performance is typically 746 achieved for a balanced training set (Provost, 2000). In our case, we over sample to help the classifier explore the large space of negative examples. Specifically, we sample 2,000 negative examples and duplicate the positive set to reach a similar number. We utilize the Support Vector Machines (SVM) implementation of LIBSVM (Chang and Lin, 2011) with a linear kernel as our classifier. The feature vector of each named entity was induced as described in Section 4.2.1. We split the labeled dataset into 70% training set and 30% validation set. Feature selection using information gain was performed on the training set to throw out nonsignificant features (Mitchell, 1997). The average accuracy of the single classifiers, measured over the validation sets, was 91%. 4.2.3 Entity pair filtering Similar to single entity filtering, we view the task of filtering entity pairs as a classification task, training a separate classifier for each relation. Entity pairs that instantiate the given relation in the question corpus are considered positive examples. Yet, the space of all the pairs that never instantiated the relation is huge, and the set of positive examples is relatively much smaller compared to the situation in the single entity classifier. In our study, uniform negative example sampling turned the training into a trivial task, preventing from the classifier to choose an useful discriminative boundary. Therefore, we generate negative examples by sampling only from pairs of named entities that both pass the single entity filter for this relation. The risk here is that we may sample false negative examples. Still, this sampling scheme enabled the classifier to identify better discriminative features. To generate features for a candidate pair, we take the two feature vectors of the two entities and induce families of pair features by comparing between the two vectors. Figure 5 describes the various features we generate. We utilize LIBSVM with an RBF kernel for this task, splitting the examples into 70% training set and 30% validation set. We over sampled the positive examples to reach up to 100 examples. The average accuracy of the pair classifiers on the validation set was 83%. 
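A hedged sketch of the pair-feature construction and the per-relation pair classifier could look like the following, assuming scikit-learn (whose SVC wraps LIBSVM, with the RBF kernel mentioned above); the single-entity representation used here, a set of DBPedia classes, a gender value and a category distribution, together with the feature names, are illustrative stand-ins for the feature families listed in Figure 5.

```python
import math
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def hellinger(p, q):
    """Hellinger distance between two category distributions given as dicts."""
    keys = set(p) | set(q)
    return math.sqrt(0.5 * sum(
        (math.sqrt(p.get(k, 0.0)) - math.sqrt(q.get(k, 0.0))) ** 2 for k in keys))

def pair_features(fa, fb):
    """Compare two single-entity representations (assumed dicts holding a set
    of DBPedia classes, a gender value and a category distribution) into the
    kinds of pair features listed in Figure 5."""
    feats = {c + "_s": 1 for c in fa["dbpedia"] & fb["dbpedia"]}        # shared
    feats.update({c + "_o": 1 for c in fa["dbpedia"] ^ fb["dbpedia"]})  # one-side
    feats["both_male"] = int(fa["gender"] == fb["gender"] == "male")
    feats["both_female"] = int(fa["gender"] == fb["gender"] == "female")
    feats["gender_differs"] = int(fa["gender"] != fb["gender"])
    feats["hellinger"] = hellinger(fa["categories"], fb["categories"])
    return feats

def train_pair_filter(pair_dicts, labels):
    """One classifier per relation, trained on vectorized pair-feature dicts
    with correct/incorrect instantiation labels."""
    return make_pipeline(DictVectorizer(), SVC(kernel="rbf")).fit(pair_dicts, labels)
```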
For example, named entities that pass the single entity filtering for “be funny”, include Jay Leno, David Letterman (American TV hosts), Jim Carrey, and Steve Mar(a) All shared DBPedia indicator features in the two vectors: f DBP edia a ∩f DBP edia b , indicating them as shared, e.g. ‘FilmMaker s’ (b) All DBPedia features that appear only in one of the vectors, termed one-side features: f DBP edia a \ f DBP edia b and f DBP edia b \ f DBP edia a , indicating them as such, e.g. ‘FilmMaker o’ (c) Wikipedia categories that are ancestors of at least two one-side features that appear in the training set. For example, a common ancestor of ‘Spanish actors’ and ‘Russian actors’ is ‘European actors’. These features provide a high level perspective on one-side features (d) The Yahoo! Answers categories in which both named entities appear (e) Hellinger distance (Pollard, 2001) between the probability distributions over categories of the two entities (f) Three indicator gender features: whether both named entities are males, both are females or are different Figure 5: The entity pair features generated from two single entity feature vectors fa and fb tin (actors). The pair classifier assigned positive scores only to {Jay Leno, David Letterman} (TV hosts) and {Jim Carrey, Steve Martin} (actors) but not to other pairings of these entities. 5 Evaluation 5.1 Experimental Settings To evaluate our algorithm’s performance, we designed a Mechanical Turk (MTurk) experiment in which human annotators assess the quality of the questions that our algorithm generates for a sample of news articles. As the source of test articles, we chose the OMG! website4, which contains news articles on celebrities. Test articles were selected by first randomly sampling 5,000 news article from those that were posted on OMG! in 2011. We then filtered out articles that are longer than 4,000 characters, which were found to be tiresome for annotators to read, and those that are shorter than 300 characters, which consist mainly of video and photos. We were left with a pool of 1,016 articles from which we randomly sampled 100 as the test set. For each test article our algorithm obtained the top three relevant comparable relations, and for each relation selected the best instantiation (if exists). We used two baselines for performance comparison. The first random baseline chooses a relation randomly out of all possible relations in the database and then instantiates it with a random pair of entities that appear in the article. The second relevance baseline chooses the most relevant 4http://www.omg.com/ 747 Relevance Correctness Random baseline 29% 43% Relevance baseline 37% 53% Full algorithm 54% 77% Table 1: Relevance and correctness percentage by tested algorithm relation to the article based on our algorithm, but still instantiates it with a random pair. For each test article, we presented to the evaluators the questions generated by the three tested algorithms in a random order to avoid any bias. We note that our second baseline enabled us to measure the stand-alone contribution of the LDA-based relevance model. In addition, it enabled us to measure the relative contribution of the instantiation models on top of relevance model. Each article was evaluated by 10 MTurk workers, which were asked to mark for each displayed question whether it is relevant and whether it is correct (see Section 2 for relevance and correctness definitions). The workers were given precise instructions along with examples before they started the test. 
A control story was used to filter out dishonest or incapable workers5. 5.2 Results For each tested algorithm, we separately counted the percentage of annotations that marked each question as relevant and the percentage of annotations that marked each question as instantiated correctly, denoted relevance score and correctness score. We then averaged these scores over all questions that were displayed for the test articles. The results are presented in Table 1. The differences between the full algorithm and the baselines are statistically significant at p < 0.01 and between baselines the differences are statistically significant at p < 0.05 using the Wilcoxon double-sided signed-ranks test (Wilcoxon, 1945). Our main result is that our full algorithm substantially outperforms the stronger relevance baseline. It improves the correctness score by 45%, which points at the effectiveness of our two step filtering of incorrect instantiations. It’s performance is just under 80%, showing high quality entity pair selection for relations. Yet, we did not expect to see an increase of 46% in the relevance 5We intend to make the tested articles, the instructions to annotators and their annotations publicly available under Yahoo! WebscopeTM (http://webscope.sandbox. yahoo.com). metric, since both the full algorithm and the relevance baseline use the same relevance component to rank relations by. One explanation for this is that sometimes the instantiation filter eliminates all possible entity pairs for some relation that is incorrectly considered relevant by the algorithm. Thus, the filtering of entities provides also an additional filtering perspective on relevance. In addition, it may be that humans tend to be more permissive when assessing the relevance of a correctly instantiated question. To illustrate the differences between baselines and the full algorithm, Table 2 presents an example article together with the suggested generated questions by each algorithm. The random baseline picked an irrelevant relation, and while the relevance baseline selected a relevant relation, “a better president”, it was instantiated incorrectly. The full algorithm, on the other hand, both chose relevant relations for all three questions and instantiated them correctly. Especially, the incorrectly instantiated relation in the relevance baseline is now correctly instantiated with plausible presidential candidates. Comparing between baselines, the relevance baseline beats the random baseline by 28% in terms of relevance. This is not surprising, since this was the focus of this baseline. Yet, it also improved correctness by 23% over the random baseline. This is an unexpected result that indicates that when users view relevant relations, they may be more forgiving in their perception of unreasonable instantiations. For each article, our full algorithm attempts to generate three questions, one for each of the top three relevant questions. It is possible that for some articles not all three questions will be generated, due to instantiation filtering. We found that for 85% of the articles all three questions were generated. For the remaining 15% at least one question was always generated, and for 1 3 of them two questions were composed. Furthermore, we found that the relevance and correctness scores were not affected by the position of the question. In the case of instantiation correctness, since the best pair was picked for each relation and this component is quite accurate, this is somewhat expected. 
In the case of relevance, this indicates that there are usually several relations in our database that are relevant to the article. 748 Ron Livingston is teaming up with Tom Hanks and HBO again after their successful 2001 collaboration on Band of Brothers. The actor has been cast in HBO’s upcoming film Game Change that centers on the 2008 presidential campaign, Deadline reports. He joins Ed Harris, Julianne Moore and Woody Harrelson. The Jay Roach-directed movie follows John McCain (Harris) as he selects Alaska Gov. Sarah Palin (Moore) as his running mate, throughout the campaign and to their ultimate defeat to Barack Obama. Livingston will play Mark Wallace, one of the campaign’s senior advisors and the man who prepped Palin for her debate. Harrelson will play campaign strategist Steve Schmidt. . . Algorithm Question Random baseline Who is a better singer, Sarah Palin or Barack Obama ? Relevance baseline Would Ron Livingston be a better president than Julianne Moore ? Full algorithm Who has the best movies Tom Hanks or Julianne Moore ? Full algorithm Is John Mccain a better leader than Barack Obama ? Full algorithm Would Sarah Palin be a better president than John Mccain ? Table 2: Automatically generated questions by the baselines and the full algorithm to an example article 5.3 Error Analysis To better understand the performance of our algorithm, we looked at some low quality questions that were generated, either due to incorrect instantiation or due to irrelevance to the article. Starting with relevance, one of the repeating mistakes was promoting relations that are related to a list of named entities in the article, but not to its main theme. For example, the relation ‘who is a better actor’ was incorrectly ranked high for an article about Ricky Gervais claiming that he has been asked to host Globes again after he offended Angelina Jolie, Johnny Depp, Robert Downey Jr. and Charlie Sheen, among others during last Globes ceremony. The reason for this mistake is that many named entities appear as frequent terms in LDA topics, and thus mentioning many names that belong to a single topic drives LDA to assign this topic a high probability. Yet, unlike other cases, here entity filtering does not help ignoring such errors, since the same entities that triggered the ranking of the relation are also valid instantiations for it. Analyzing incorrect instantiations, many mistakes are due to mismatches between the two compared entities that were too fine grained for our algorithm to catch. For example, “who’s the better guitarist: Paul McCartney or Ringo Starr?” was generated since our algorithm failed to identify that Ringo Starr is a drummer rather than a guitarist, though both participants in the relation are musicians. In other cases, strong co-occurrence of the two celebs in our question corpus convinced the classifiers that they can be matched. For example, “who is a better dancer Michael Jackson or Debbie Rowe?” was incorrectly generated, since Debbie Rowe is not a dancer. Yet, she was Michael Jackson’s wife and they appear together in a lot of questions in our corpus. 6 Related Work Traditionally, question generation focuses on converting assertions in a text into question forms (Brown et al., 2005; Mitkov et al., 2006; Myller, 2007; Heilman and Smith, 2010; Rus et al., 2010; Agarwal et al., 2011; Olney et al., 2012). 
To the best of our knowledge, there is no prior work on our task, which is to generate relevant synthetic questions whose content, except for the arguments, might not appear in the text. Our extraction of comparable relations falls within the field of Relation Extraction, in which CRF is a state-of-the-art method (Mooney and Bunescu, 2005; Culotta et al., 2006). We note that in the works of Jindal and Liu (2006) and Li et. al. (2010) comparative questions are identified as an intermediate step for the task of extracting compared entities, which are unknown in their setting. We, on the other hand, detect the compared entities in a pre-processing step, and our target is the extraction of the comparable relations given known candidate entities. Our algorithm ranks relevant templates based on the similarity between an article’s content and the typical context of each relation. Prior work rank relevant concrete questions to a given input question, focusing on strong lexical similarities (Jeon et al., 2005; Cai et al., 2011; Hao and Agichtein, 2012). We, however, do not expect to find direct lexical similarities between candidate relations and the article. Instead, we are interested in a higher level topical similarity to the input article, for which LDA topics were shown to help (Celikyilmaz et al., 2010). Finally, several works present unsupervised methods for ranking proper template instantia749 tions, mainly as selectional preferences (Light and Greiff, 2002; Erk, 2007; Ritter et al., 2010). However, we eventually choose instantiation candidates, and thus preferred supervised methods that enable filtering and not just ranking. Furthermore, we target a more subtle discrimination between entities than prior work, e.g. between quarterbacks, singers and actors. Machine learning naturally incorporates the many features that capture different aspects of entity characterization. 7 Conclusions We introduced the novel task of automatically generating synthetic comparable questions that are relevant to a given news article but do not necessarily appear in it. To this end, we proposed an algorithm that consists of two parts. The offline part identifies comparable relations in a large collection of questions. Its output is a database of comparable relations together with a context profile for each relation and models that detect correct instantiations of this relation, all learned from the question corpus. In the online part, given a news article, the algorithm identifies relevant comparable relations based on the similarity between the article content and each relation’s context profile. Then, relevant relations are instantiated only with pairs of named entities from the article whose comparison makes sense by applying the instantiation correctness models to candidate pairs. We assessed the performance of our algorithm via a Mechanical Turk experiment. A partial version of our algorithm, without instantiation filtering, was our strongest baseline. The full algorithm outperformed this baseline by 45% on question correctness, but surprisingly also by 46% on question relevance. These results show that our supervised filtering methods are successful in keeping only correct pairs, but they also serve as an additional filtering for relevant relations, on top of context matching. In future work, we want to generate more diverse and intriguing questions by selecting relevant named entities for template instantiation that do not appear in the article. 
Another direction would be take a supervised approach, training classifiers over a labeled dataset for filtering irrelevant templates and incorrect instantiations. Finally, it would be interesting to see how our algorithm performs on other news domains. References Manish Agarwal, Rakshit Shah, and Prashanth Mannem. 2011. Automatic question generation using discourse cues. In Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications, IUNLPBEA ’11, pages 1–9, Stroudsburg, PA, USA. Association for Computational Linguistics. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, March. Jonathan C. Brown, Gwen A. Frishkoff, and Maxine Eskenazi. 2005. Automatic question generation for vocabulary assessment. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 819–826, Stroudsburg, PA, USA. Association for Computational Linguistics. Li Cai, Guangyou Zhou, Kang Liu, and Jun Zhao. 2011. Learning the latent topics for question retrieval in community qa. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 273–281, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Asli Celikyilmaz, Dilek Hakkani-Tur, and Gokhan Tur. 2010. Lda based similarity modeling for question answering. In Proceedings of the NAACL HLT 2010 Workshop on Semantic Search, SS ’10, pages 1–9, Stroudsburg, PA, USA. Association for Computational Linguistics. Chih-Chung Chang and Chih-Jen Lin. 2011. Libsvm: A library for support vector machines. ACM TIST, 2(3):27. Aron Culotta, Andrew McCallum, and Jonathan Betz. 2006. Integrating probabilistic extraction models and data mining to discover relations and patterns in text. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL ’06, pages 296– 303, Stroudsburg, PA, USA. Association for Computational Linguistics. Katrin Erk. 2007. A simple, similarity-based model for selectional preferences. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 216–223, Prague, Czech Republic, June. Association for Computational Linguistics. Tianyong Hao and Eugene Agichtein. 2012. Finding similar questions in collaborative question answering archives: toward bootstrapping-based equivalent pattern learning. Inf. Retr., 15(3-4):332–353, June. Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. 750 In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 609–617, Stroudsburg, PA, USA. Association for Computational Linguistics. Jiwoon Jeon, W. Bruce Croft, and Joon Ho Lee. 2005. Finding similar questions in large question and answer archives. In Proceedings of the 14th ACM international conference on Information and knowledge management, CIKM ’05, pages 84–90, New York, NY, USA. ACM. Nitin Jindal and Bing Liu. 2006. Mining comparative sentences and relations. In proceedings of the 21st national conference on Artificial intelligence - Volume 2, AAAI’06, pages 1331–1336. AAAI Press. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML. 
Shasha Li, Chin-Yew Lin, Young-In Song, and Zhoujun Li. 2010. Comparable entity mining from comparative questions. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 650–658. Association for Computational Linguistics. Marc Light and Warren R. Greiff. 2002. Statistical models for the induction and use of selectional preferences. Cognitive Science, 26(3):269–281. Tom M. Mitchell. 1997. Machine learning. McGraw Hill series in computer science. McGraw-Hill. Ruslan Mitkov, Le An Ha, and Nikiforos Karamanis. 2006. A computer-aided environment for generating multiple-choice test items. Nat. Lang. Eng., 12(2):177–194, June. Raymond J. Mooney and Razvan Bunescu. 2005. Mining knowledge from text using information extraction. SIGKDD Explor. Newsl., 7(1):3–10, June. Niko Myller. 2007. Automatic generation of prediction questions during program visualization. Electron. Notes Theor. Comput. Sci., 178:43–49, July. A.M. Olney, A.C. Graesser, and N.K. Person. 2012. Question generation from concept maps. Dialogue & Discourse, 3(2):75–99. D. Pollard. 2001. A User’s Guide to Measure Theoretic Probability. Cambridge University Press. F. Provost. 2000. Machine learning from imbalanced data sets 101. Proceedings of the AAAI-2000 Workshop on Imbalanced Data Sets. Alan Ritter, Mausam, and Oren Etzioni. 2010. A latent dirichlet allocation method for selectional preferences. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 424–434, Stroudsburg, PA, USA. Association for Computational Linguistics. Vasile Rus, Brendan Wyse, Paul Piwek, Mihai C. Lintean, Svetlana Stoyanchev, and Cristian Moldovan. 2010. The first question generation shared task evaluation challenge. In John D. Kelleher, Brian Mac Namee, Ielka van der Sluis, Anja Belz, Albert Gatt, and Alexander Koller, editors, INLG 2010 - Proceedings of the Sixth International Natural Language Generation Conference, July 7-9, 2010, Trim, Co. Meath, Ireland. The Association for Computer Linguistics. Anne Schuth, Maarten Marx, and Maarten de Rijke. 2007. Extracting the discussion structure in comments on news-articles. In Proceedings of the 9th annual ACM international workshop on Web information and data management, WIDM ’07, pages 97–104, New York, NY, USA. ACM. Frank Wilcoxon. 1945. Individual comparisons by ranking methods. Biometrics Bulletin, 1:80–83. 751
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 752–760, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Punctuation Prediction with Transition-based Parsing Dongdong Zhang1, Shuangzhi Wu2, Nan Yang3, Mu Li1 1Microsoft Research Asia, Beijing, China 2Harbin Institute of Technology, Harbin, China 3University of Science and Technology of China, Hefei, China {dozhang,v-shuawu,v-nayang,muli}@microsoft.com Abstract Punctuations are not available in automatic speech recognition outputs, which could create barriers to many subsequent text processing tasks. This paper proposes a novel method to predict punctuation symbols for the stream of words in transcribed speech texts. Our method jointly performs parsing and punctuation prediction by integrating a rich set of syntactic features when processing words from left to right. It can exploit a global view to capture long-range dependencies for punctuation prediction with linear complexity. The experimental results on the test data sets of IWSLT and TDT4 show that our method can achieve high-level performance in punctuation prediction over the stream of words in transcribed speech text. 1 Introduction Standard automatic speech recognizers output unstructured streams of words. They neither perform a proper segmentation of the output into sentences, nor predict punctuation symbols. The unavailable punctuations and sentence boundaries in transcribed speech texts create barriers to many subsequent processing tasks, such as summarization, information extraction, question answering and machine translation. Thus, the segmentation of long texts is necessary in many real applications. For example, in speech-to-speech translation, continuously transcribed speech texts need to be segmented before being fed into subsequent machine translation systems (Takezawa et al., 1998; Nakamura, 2009). This is because current machine translation (MT) systems perform the translation at the sentence level, where various models used in MT are trained over segmented sentences and many algorithms inside MT have an exponential complexity with regard to the length of inputs. The punctuation prediction problem has attracted research interest in both the speech processing community and the natural language processing community. Most previous work primarily exploits local features in their statistical models such as lexicons, prosodic cues and hidden event language model (HELM) (Liu et al., 2005; Matusov et al., 2006; Huang and Zweig, 2002; Stolcke and Shriberg, 1996). The word-level models integrating local features have narrow views about the input and could not achieve satisfied performance due to the limited context information access (Favre et al., 2008). Naturally, global contexts are required to model the punctuation prediction, especially for long-range dependencies. For instance, in English question sentences, the ending question mark is long-range dependent on the initial phrases (Lu and Ng, 2010), such as “could you” in Figure 1. There has been some work trying to incorporate syntactic features to broaden the view of hypotheses in the punctuation prediction models (Roark et al., 2006; Favre et al., 2008). In their methods, the punctuation prediction is treated as a separated post-procedure of parsing, which may suffer from the problem of error propagation. 
In addition, these approaches are not able to incrementally process inputs and are not efficient for very long inputs, especially in the cases of long transcribed speech texts from presentations where the number of streaming words could be larger than hundreds or thousands. In this paper, we propose jointly performing punctuation prediction and transition-based dependency parsing over transcribed speech text. When the transition-based parsing consumes the stream of words left to right with the shift-reduce decoding algorithm, punctuation symbols are predicted for each word based on the contexts of the parsing tree. Two models are proposed to make the punctuation prediction interact with the transition actions in parsing. One is to conduct transition actions of parsing followed by punctuation predictions in a cascaded way. The other is to associate the conventional transition actions of parsing with punctuation predictions, so that predicted punctuations are directly inferred from the parsing tree. [Figure 1. An example of punctuation prediction: (a) the transcribed speech text without punctuations; (b) transition-based parsing trees and predicted punctuations over the transcribed text; (c) two segmentations formed when inserting the predicted punctuation symbols into the transcribed text.] Our models have linear complexity and are capable of handling streams of words of any length. In addition, the computation of the models uses a rich set of syntactic features, which can improve the complicated punctuation predictions from a global view, especially for long-range dependencies. Figure 1 shows an example of how parsing helps punctuation prediction over the transcribed speech text. As illustrated in Figure 1(b), two commas are predicted when their preceding words act as adverbial modifiers (advmod) during parsing. The period after the word “menu” is predicted when the parsing of an adverbial clause modifier (advcl) is completed. The question mark at the end of the input is determined when a direct object modifier (dobj) is identified, together with the long-range clue that the auxiliary word occurs before the nominal subject (nsubj). Eventually, two segmentations are formed according to the punctuation prediction results, shown in Figure 1(c). The training data used for our models are adapted from Treebank data by excluding all punctuations but keeping the punctuation contexts, so that they can simulate the unavailable annotated transcribed speech texts. In decoding, beam search is used to get optimal punctuation prediction results. We conduct experiments on both IWSLT data and TDT4 test data sets. The experimental results show that our method can achieve higher performance than the CRF-based baseline method. The paper is structured as follows: Section 2 conducts a survey of related work. Transition-based dependency parsing is introduced in Section 3. We explain our approach to predicting punctuations for transcribed speech texts in Section 4. Section 5 gives the results of our experiments. The conclusion and future work are given in Section 6. 2 Related Work Sentence boundary detection and punctuation prediction have been extensively studied in the speech processing field and have attracted research interest in the natural language processing field as well. Most previous work exploits local features for the task. Kim and Woodland (2001), Huang and Zweig (2002), Christensen et al. (2001), and Liu et al. (2005) integrate both prosodic features (pitch, pause duration, etc.)
and lexical features (words, n-grams, etc.) to predict punctuation symbols during speech recognition, where Huang and Zweig (2002) use a maximum entropy model, Christensen et al. (2001) focus on finite state and multi-layer perceptron methods, and Liu et al. (2005) use conditional random fields. However, in some scenarios the prosodic cues are not available due to inaccessible original raw speech waveforms. Matusov et al. (2006) integrate segmentation features into the log-linear model in the statistical machine translation (SMT) framework to improve the translation performance when translating transcribed speech texts. Lu and Ng (2010) use dynamic conditional random fields to perform both sentence boundary and sentence type prediction. They achieved promising results on both English and Chinese transcribed speech texts. The above work only exploits local features, so it is limited in capturing long-range dependencies for punctuation prediction. It is natural to incorporate global knowledge, such as syntactic information, to improve punctuation prediction performance. Roark et al. (2006) use a rich set of non-local features including parser scores to re-rank full segmentations. Favre et al. (2008) integrate syntactic information from a PCFG parser into a log-linear model and combine it with local features for sentence segmentation. The punctuation prediction in these works is performed as a post-procedure step of parsing, where a parse tree needs to be built in advance. As their parsing over the stream of words in transcribed speech text is exponentially complex, their approaches are only feasible for short input processing. Unlike these works, we incorporate punctuation prediction into the parsing, which processes left-to-right input without length limitations. Numerous dependency parsing algorithms have been proposed in the natural language processing community, including transition-based and graph-based dependency parsing. Compared to graph-based parsing, transition-based parsing can offer linear time complexity and easily leverage non-local features in the models (Yamada and Matsumoto, 2003; Nivre et al., 2006b; Zhang and Clark, 2008; Huang and Sagae, 2010). Starting from the work of Zhang and Nivre (2011), in this paper we extend transition-based dependency parsing from the sentence level to the stream of words and integrate the parsing with punctuation prediction. Joint POS tagging and transition-based dependency parsing are studied in (Hatori et al., 2011; Bohnet and Nivre, 2012). Improvements are reported with the joint model compared to the pipeline model for Chinese and other richly inflected languages, which shows that it also makes sense to jointly perform punctuation prediction and parsing, although the two tasks of POS tagging and punctuation prediction are different in three ways: 1) the former usually works on a well-formed single sentence while the latter needs to process multiple sentences that are very lengthy; 2) POS tags are must-have features for parsing while punctuations are not.
3) The parsing quality in the former is more sensitive to the performance of the entire task than in the latter. 3 Transition-based dependency parsing In a typical transition-based dependency parsing process, the shift-reduce decoding algorithm is applied and a queue and a stack are maintained (Zhang and Nivre, 2011). The queue stores the stream of transcribed speech words, the front of which is indexed as the current word. The stack stores the unfinished words, which may be linked with the current word or a future word in the queue. When words in the queue are consumed from left to right, a set of transition actions is applied to build a parse tree. There are four kinds of transition actions conducted in the parsing process (Zhang and Nivre, 2011), as described in Table 1.
Table 1. Action types in transition-based parsing:
Shift: fetches the current word from the queue and pushes it to the stack.
Reduce: pops the stack.
LeftArc: adds a dependency link from the current word to the stack top, and pops the stack.
RightArc: adds a dependency link from the stack top to the current word, takes the current word away from the queue and pushes it to the stack.
The choice of each transition action during the parsing is scored by a linear model that can be trained over a rich set of non-local features extracted from the contexts of the stack, the queue and the set of dependency labels. As described in (Zhang and Nivre, 2011), the feature templates can be defined over lexicons, POS tags and their combinations with syntactic information. In parsing, beam search is performed to search for the optimal sequence of transition actions, from which a parse tree is formed (Zhang and Clark, 2008). As each word must be pushed to the stack once and popped off once, the number of actions needed to parse a sentence is always 2n, where n is the length of the sentence. Thus, transition-based parsing has a linear complexity in the length of the input and naturally it can be extended to process the stream of words. 4 Our method 4.1 Model In the task of punctuation prediction, we are given a stream of words from an automatic transcription of speech text, denoted by $w_1^n := w_1, w_2, \ldots, w_n$. We are asked to output a sequence of punctuation symbols $S_1^n := s_1, s_2, \ldots, s_n$, where $s_i$ is attached to $w_i$ to form a sentence like Figure 1(c). If there are no ambiguities, $S_1^n$ is also abbreviated as $S$, and similarly $w_1^n$ as $w$. We model the search for the best sequence of predicted punctuation symbols $S^*$ as:
$S^* = \arg\max_S P(S_1^n \mid w_1^n)$   (1)
We introduce the transition-based parsing tree $T$ to guide the punctuation prediction in Model (2), where parsing trees are constructed over the transcribed text while containing no punctuations:
$S^* = \arg\max_S \sum_T P(T \mid w_1^n) \times P(S_1^n \mid T, w_1^n)$   (2)
Rather than enumerate all possible parsing trees, we jointly optimize the punctuation prediction model and the transition-based parsing model with the form:
$(S^*, T^*) = \arg\max_{(S,T)} P(T \mid w_1^n) \times P(S_1^n \mid T, w_1^n)$   (3)
Let $T_1^i$ be the constructed partial tree when $w_1^i$ is consumed from the queue. We decompose Model (3) into:
$(S^*, T^*) = \arg\max_{(S,T)} \prod_{i=1}^{n} P(T_1^i \mid T_1^{i-1}, w_1^i) \times P(s_i \mid T_1^i, w_1^i)$   (4)
It is noted that a partial parsing tree uniquely corresponds to a sequence of transition actions, and vice versa. Suppose $T_1^i$ corresponds to the action sequence $A_1^i$ and let $a_i$ denote the last action in $A_1^i$. As the current word $w_i$ can only be consumed from the queue by either Shift or RightArc according to Table 1, we have $a_i \in \{\mathrm{Shift}, \mathrm{RightArc}\}$.
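The four transition actions in Table 1 are easy to make concrete. The sketch below is our illustration rather than the authors' implementation: it maintains the stack and queue and applies the actions exactly as described in Table 1, while omitting the scoring model, beam search and dependency labels.

```python
class ParserState:
    """Minimal state for the four transition actions of Table 1."""
    def __init__(self, words):
        self.queue = list(words)   # front of the queue is the current word
        self.stack = []
        self.arcs = []             # (head, dependent) pairs

    def shift(self):               # consume the current word and push it
        self.stack.append(self.queue.pop(0))

    def reduce(self):              # pop the finished stack top
        self.stack.pop()

    def left_arc(self):            # current word governs the stack top
        self.arcs.append((self.queue[0], self.stack.pop()))

    def right_arc(self):           # stack top governs the current word
        self.arcs.append((self.stack[-1], self.queue[0]))
        self.stack.append(self.queue.pop(0))

    def finished(self):
        return not self.queue and len(self.stack) <= 1
```

Every word enters the stack exactly once (through shift or right_arc) and leaves it exactly once (through reduce or left_arc), which gives the 2n-action bound above; moreover, only shift and right_arc consume a word from the queue, which is the property the synchronization that follows relies on.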
Thus, we synchronize the punctuation prediction with the application of Shift and RightArc during the parsing, which is explained by Model (5). (𝑆∗, 𝑇∗) = argmax(𝑆,𝑇) ∏ 𝑃(𝑇1 𝑖, 𝐴1 𝑖|𝑇1 𝑖−1, 𝑤1 𝑖) 𝑛 𝑖=1 × 𝑃(𝑠𝑖|𝑎𝑖, 𝑇1 𝑖, 𝑤1 𝑖) (5) The model is further refined by reducing the computation scope. When a full-stop punctuation is determined (i.e., a segmentation is formed), we discard the previous contexts and restart a new 1 Specially, 𝑏𝑖 is equal to 1 if there are no previous full-stop punctuations. procedure for both parsing and punctuation prediction over the rest of words in the stream. In this way we are theoretically able to handle the unlimited stream of words without needing to always keep the entire context history of streaming words. Let 𝑏𝑖 be the position index of last full-stop punctuation1 before 𝑖, 𝑇𝑏𝑖 𝑖 and 𝐴𝑏𝑖 𝑖the partial tree and corresponding action sequence over the words 𝑤𝑏𝑖 𝑖, Model (5) can be rewritten by: (𝑆∗,𝑇∗) = argmax(𝑆,𝑇) ∏ 𝑃(𝑇𝑏𝑖 𝑖,𝐴𝑏𝑖 𝑖|𝑇𝑏𝑖 𝑖−1,𝑤𝑏𝑖 𝑖) × 𝑛 𝑖=1 𝑃(𝑠𝑖|𝑎𝑖, 𝑇𝑏𝑖 𝑖, 𝑤𝑏𝑖 𝑖) (6) With different computation of Model (6), we induce two joint models for punctuation prediction: the cascaded punctuation prediction model and the unified punctuation prediction model. 4.2 Cascaded punctuation prediction model (CPP) In Model (6), the computation of two sub-models is independent. The first sub-model is computed based on the context of words and partial trees without any punctuation knowledge, while the computation of the second sub-model is conditional on the context from the partially built parsing tree 𝑇𝑏𝑖 𝑖 and the transition action. As the words in the stream are consumed, each computation of transition actions is followed by a computation of punctuation prediction. Thus, the two sub-models are computed in a cascaded way, until the optimal parsing tree and optimal punctuation symbols are generated. We call this model the cascaded punctuation prediction model (CPP). 4.3 Unified punctuation prediction model (UPP) In Model (6), if the punctuation symbols can be deterministically inferred from the partial tree, 𝑃(𝑠𝑖|𝑎𝑖, 𝑇𝑏𝑖 𝑖, 𝑤𝑏𝑖 𝑖) can be omitted because it is always 1. Similar to the idea of joint POS tagging and parsing (Hatori et al., 2011; Bohnet and Nivre, 2012), we propose attaching the punctuation prediction onto the parsing tree by embedding 𝑠𝑖 into 𝑎𝑖. Thus, we extend the conventional transition actions illustrated in Table 1 to a new set of transition actions for the parsing, denoted by 𝐴̂: 755 𝐴̂ = {𝐿𝑒𝑓𝑡𝐴𝑟𝑐, 𝑅𝑒𝑑𝑢𝑐𝑒} ∪{𝑆ℎ𝑖𝑓𝑡(𝑠)|𝑠∈𝑄} ∪{𝑅𝑖𝑔ℎ𝑡𝐴𝑟𝑐(𝑠)|𝑠∈𝑄} where Q is the set of punctuation symbols to be predicted, 𝑠 is a punctuation symbol belonging to Q, Shift(s) is an action that attaches s to the current word on the basis of original Shift action in parsing, RightArc(s) attaches 𝑠 to the current word on the basis of original RightArc action. With the redefined transition action set 𝐴̂, the computation of Model (6) is reformulated as: (𝑆∗,𝑇∗) = argmax(𝑆,𝑇) ∏ 𝑃(𝑇𝑏𝑖 𝑖,𝐴̂ 𝑏𝑖 𝑖|𝑇𝑏𝑖 𝑖−1, 𝐴̂𝑏𝑖 𝑖−1, 𝑤𝑏𝑖 𝑖) 𝑛 𝑖=1 (7) Here, the computation of parsing tree and punctuation prediction is unified into one model where the sequence of transition action outputs uniquely determines the punctuations attached to the words. We refer to it as the unified punctuation prediction model (UPP). (a). Parsing tree and attached punctuation symbols Shift(,), Shift(N), Shift(N), LeftArc, LeftArc, LeftArc, Shift(N), RightArc(?), Reduce, Reduce (b). The corresponding sequence of transition actions Figure 2. 
An example of punctuation prediction using the UPP model, where N is a null type punctuation symbol denoting no need to attach any punctuation to the word. Figure 2 illustrates an example how the UPP model works. Given an input “so could you tell me”, the optimal sequence of transition actions in Figure 2(b) is calculated based on the UPP model to produce the parsing tree in Figure 2(a). According to the sequence of actions, we can determine the sequence of predicted punctuation symbols like “,NNN?” that have been attached to the words shown in Figure 2(a). The final segmentation with the predicted punctuation insertion could be “so, could you tell me?”. 4.4 Model training and decoding In practice, the sub-models in Model (6) and (7) with the form of 𝑃(𝑌|𝑋) is computed with a linear model 𝑆𝑐𝑜𝑟𝑒(𝑌, 𝑋) as 𝑆𝑐𝑜𝑟𝑒(𝑌, 𝑋) = 𝛷(𝑌, 𝑋) ∙𝜆⃗ where 𝛷(𝑌, 𝑋) is the feature vector extracted from the output 𝑌 and the context 𝑋, and 𝜆⃗ is the weight vector. For the features of the models, we incorporate the bag of words and POS tags as well as tree-based features shown in Table 2, which are the same as those defined in (Zhang and Nivre, 2011). (a) ws; w0; w1; w2; ps; p0; p1; p2; wsps; w0p0; w1p1; w2p2; wspsw0p0; wspsw0; wspsp0; wsw0p0; psw0p0; wsw0; psp0; p0p1; psp0p1; p0p1p2; (b) pshpsp0; pspslp0; pspsrp0; psp0p0l; wsd; psd; w0d; p0d; wsw0d; psp0d; wsvl; psvl; wsvr; psvr; w0vl; p0vl; wsh; psh; ts; w0l; p0l; t0l; w0r; p0r; t0r; w1l; p1l; t1l; wsh2; psh2; tsh; wsl2; psl2; tsl2; wsr2; psr2; tsr2; w0l2; p0l2; t0l2; pspslpsl2; pspsrpsr2; pspshpsh2; p0p0lp0l2; wsTl; psTl; wsTr; psTr; w0Tl; p0Tl; Table 2. (a) Features of the bag of words and POS tags. (b). Tree-based features. wword; pPOS tag; ddistance between ws and w0; vnumber of modifiers; tdependency label; Tset of dependency labels; s, 0, 1 and 2 index the stack top and three front items in the queue respectively; hhead; lleft/leftmost; rright/rightmost; h2head of a head; l2second leftmost; r2second rightmost. The training data for both the CPP and UPP models need to contain parsing trees and punctuation information. Due to the absence of annotation over transcribed speech data, we adapt the Treebank data for the purpose of model training. To do this, we remove all types of syntactic information related to punctuation symbols from the raw Treebank data, but record what punctuation symbols are attached to the words. We normalize various punctuation symbols into two types: Middle-paused punctuation (M) and Full-stop punctuation (F). Plus null type (N), there are three kinds of punctuation symbols attached to the words. Table 3 illustrates the normalizations of punctuation symbols. In the experiments, we did not further distinguish the type among full-stop punctuation because the question mark and the exclamation mark have very low frequency in Treebank data. so could you tell me , N N N ? nsubj iobj aux advmod 756 But our CPP and UPP models are both independent regarding the number of punctuation types to be predicted. Punctuations Normalization Period, question mark, exclamation mark Full-stop punctuation (F) Comma, Colon, semicolon Middle-paused punctuation (M) Multiple Punctuations (e.g., !!!!?) Full-stop punctuation (F) Quotations, brackets, etc. Null (N) Table 3. Punctuation normalization in training data As the feature templates are the same for the model training of both CPP and UPP, the training instances of CPP and UPP have the same contexts but with different outputs. 
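As a small illustration of how a UPP action sequence fixes the attached punctuation, the snippet below (illustrative only; the action-string format is our assumption) reads off the symbols carried by Shift(s) and RightArc(s), the only two actions that consume a word from the queue, and reproduces the segmentation of Figure 2.

```python
def punctuate(words, actions):
    # collect the punctuation argument of every queue-consuming action
    symbols = []
    for action in actions:
        if action.startswith(("Shift(", "RightArc(")):
            symbols.append(action[action.index("(") + 1:-1])
    assert len(symbols) == len(words)
    # attach each symbol to its word; N means no punctuation
    out = [w + ("" if s == "N" else s) for w, s in zip(words, symbols)]
    return " ".join(out)

print(punctuate(
    ["so", "could", "you", "tell", "me"],
    ["Shift(,)", "Shift(N)", "Shift(N)", "LeftArc", "LeftArc",
     "LeftArc", "Shift(N)", "RightArc(?)", "Reduce", "Reduce"]))
# -> "so, could you tell me?"
```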
Similar to work in (Zhang and Clark, 2008; Zhang and Nivre, 2011), we train CPP and UPP by generalized perceptron (Collins, 2002). In decoding, beam search is performed to get the optimal sequence of transition actions in CPP and UPP, and the optimal punctuation symbols in CPP. To ensure each segment decided by a fullstop punctuation corresponds to a single parsing tree, two constraints are applied in decoding for the pruning of deficient search paths. (1) Proceeding-constraint: If the partial parsing result is not a single tree, the full-stop punctuation prediction in CPP cannot be performed. In UPP, if Shift(F) or RightArc(F) fail to result in a single parsing tree, they cannot be performed as well. (2) Succeeding-constraint: If the full-stop punctuation is predicted in CPP, or Shift(F) and RightArc(F) are performed in UPP, the following transition actions must be a sequence of Reduce actions until the stack becomes empty. 5 Experiments 5.1 Experimental setup Our training data of transition-based dependency trees are converted from phrasal structure trees in English Web Treebank (LDC2012T13) and the English portion of OntoNotes 4.0 (LDC2011T03) by the Stanford Conversion toolkit (Marneffe et al., 2006). It contains around 1.5M words in total and consist of various genres including weblogs, web texts, newsgroups, email, reviews, questionanswer sessions, newswires, broadcast news and broadcast conversations. To simulate the transcribed speech text, all words in dependency trees are lowercased and punctuations are excluded before model training. In addition, every ten dependency trees are concatenated sequentially to simulate a parsing result of a stream of words in the model training. There are two test data sets used in our experiments. One is the English corpus of the IWSLT09 evaluation campaign (Paul, 2009) that is the conversional speech text. The other is a subset of the TDT4 English data (LDC2005T16) which consists of 200 hours of closed-captioned broadcast news. In the decoding, the beam size of both the transition-based parsing and punctuation prediction is set to 5. The part-of-speech tagger is our re-implementation of the work in (Collins, 2002). The evaluation metrics of our experiments are precision (prec.), recall (rec.) and F1-measure (F1). For the comparison, we also implement a baseline method based on the CRF model. It incorporates the features of bag of words and POS tags shown in Table 2(a), which are commonly used in previous related work. 5.2 Experimental results We test the performance of our method on both the correctly recognized texts and automatically recognized texts. The former data is used to evaluate the capability of punctuation prediction of our algorithm regardless of the noises from speech data, as our model training data come from formal text instead of transcribed speech data. The usage of the latter test data set aims to evaluate the effectiveness of our method in real applications where lots of substantial recognition errors could be contained. In addition, we also evaluate the quality of our transition-based parsing, as its performance could have a big influence on the quality of punctuation prediction. 5.2.1 Performance on correctly recognized text The evaluation of our method on correctly recognized text uses 10% of IWSLT09 training set, which consists of 19,972 sentences from BTEC (Basic Travel Expression Corpus) and 10,061 sentences from CT (Challenge Task). 
The average input length is about 10 words and each input contains 1.3 sentences on average. The evaluation results are presented in Table 4. 757 Measure MiddlePaused Full-stop Mixed Baseline (CRF) prec. 33.2% 81.5% 78.8% rec. 25.9% 83.8% 80.7% F1 29.1% 82.6% 79.8% CPP prec. 51% 89% 89.6% rec. 50.3% 93.1% 92.7% F1 50.6% 91% 91.1% UPP prec. 52.6% 93.2% 92% rec. 59.7% 91.3% 92.3% F1 55.9% 92.2% 92.2% Table 4. Punctuation prediction performance on correctly recognized text We achieved good performance on full-stop punctuation compared to the baseline, which shows our method can efficiently process sentence segmentation because each segment is decided by the structure of a single parsing tree. In addition, the global syntactic knowledge used in our work help capture long range dependencies of punctuations. The performance of middle-paused punctuation prediction is fairly low between all methods, which shows predicting middle-paused punctuations is a difficult task. This is because the usage of middle-paused punctuations is very flexible, especially in conversional data. The last column in Table 4 presents the performance of the pure segmentation task where the middle-paused and full-stop punctuations are mixed and not distinguished. The performance of our method is much higher than that of the baseline, which shows our method is good at segmentation. We also note that UPP yields slightly better performance than CPP on full-stop and mixed punctuation prediction, and much better performance on middle-paused punctuation prediction. This could be because the interaction of parsing and punctuation prediction is closer together in UPP than in CPP. 5.2.2 Performance on automatically recognized text Table 5 shows the experimental results of punctuation prediction on automatically recognized text from TDT4 data that is recognized using SRI’s English broadcast news ASR system where the word error rate is estimated to be 18%. As the annotation of middle-paused punctuations in TDT4 is not available, we can only evaluate the performance of full-stop punctuation prediction (i.e., detecting sentence boundaries). Thus, we merge every three sentences into one single input before performing full-stop prediction. The average input length is about 43 words. Measure Full-stop Baseline (CRF) prec. 37.7% rec. 60.7% F1 46.5% CPP prec. 63% rec. 58.6% F1 60.2% UPP prec. 73.9% rec. 51.6% F1 60.7% Table 5. Punctuation prediction performance on automatically recognized text Generally, the performance shown in Table 5 is not as high as that in Table 4. This is because the speech recognition error from ASR systems degrades the capability of model prediction. Another reason might be that the domain and style of our training data mismatch those of TDT4 data. The baseline gets a little higher recall than our method, which shows the baseline method tends to make aggressive segmentation decisions. However, both precision and F1 score of our method are much higher than the baseline. CPP has higher recall than UPP, but with lower precision and F1 score. This is in line with Table 4, which consistently illustrates CPP can get higher recall on fullstop punctuation prediction for both correctly recognized and automatically recognized texts. 5.2.3 Performance of transition-based parsing Performance of parsing affects the quality of punctuation prediction in our work. 
In this section, we separately evaluate the performance of our transition-based parser over various domains including the Wall Street Journal (WSJ), weblogs, newsgroups, answers, email messages and reviews. We divided annotated Treebank data into three data sets: 90% for model training, 5% for the development set and 5% for the test set. The accuracy of our POS-tagger achieves 96.71%. The beam size in the decoding of both our POS-tagging and parsing is set to 5. Table 6 presents the results of our experiments on the measures of UAS and LAS, where the overall accuracy is obtained from a general model which is trained over the combination of the training data from all domains. 758 We first evaluate the performance of our transition-based parsing over texts containing punctuations (TCP). The evaluation results show that our transition-based parser achieves state-of-the-art performance levels, referring to the best dependency parsing results reported in the shared task of SANCL 2012 workshop2, although they cannot be compared directly due to the different training data and test data sets used in the experiments. Secondly, we evaluate our parsing model in CPP over the texts without punctuations (TOP). Surprisingly, the performance over TOP is better than that over TCP. The reason could be that we cleaned out data noises caused by punctuations when preparing TOP data. These results illustrate that the performance of transition-based parsing in our method does not degrade after being integrated with punctuation prediction. As a by-product of the punctuation prediction task, the outputs of parsing trees can benefit the subsequent text processing tasks. Data sets UAS LAS Texts containing punctuations (TCP) WSJ 92.6% 90.3% Weblogs 90.7% 88.2% Answers 89.4% 85.7% Newsgroups 90.1% 87.6% Reviews 90.9% 88.4% Email Messages 89.6% 87.1% Overall 90.5% 88% Texts without punctuations (TOP) WSJ 92.6% 91.1% Weblogs 92.5% 91.1% Answers 95% 94% Newsgroups 92.6% 91.2% Reviews 92.6% 91.2% Email Messages 92.9% 91.7% Overall 92.6% 91.2% Table 6. The performance of our transition-based parser on written texts. UAS=unlabeled attachment score; LAS=labeled attachment score 6 Conclusion and Future Work In this paper, we proposed a novel method for punctuation prediction of transcribed speech texts. Our approach jointly performs parsing and punctuation prediction by integrating a rich set of syntactic features. It can not only yield parse trees, but also determine sentence boundaries and predict punctuation symbols from a global view of the in 2 https://sites.google.com/site/sancl2012/home/sharedtask/results puts. The proposed algorithm has linear complexity in the size of input, which can efficiently process the stream of words from a purely text processing perspective without the dependences on either the ASR systems or subsequent tasks. The experimental results show that our approach outperforms the CRF-based method on both the correctly recognized and automatically recognized texts. In addition, the performance of the parsing over the stream of transcribed words is state-ofthe-art, which can benefit many subsequent text processing tasks. In future work, we will try our method on other languages such as Chinese and Japanese, where Treebank data is available. We would also like to test the MT performance over transcribed speech texts with punctuation symbols inserted based on our method proposed in this paper. References B. Bohnet and J. Nivre. 2012. 
A transition-based system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proc. EMNLP-CoNLL 2012. H. Christensen, Y. Gotoh, and S. Renals. 2001. Punctuation annotation using statistical prosody models. In Proc. of ISCA Workshop on Prosody in Speech Recognition and Understanding. M. Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP’02, pages 1-8. B. Favre, R. Grishman, D. Hillard, H. Ji, D. HakkaniTur, and M. Ostendorf. 2008. Punctuating speech for information extraction. In Proc. of ICASSP’08. B. Favre, D. HakkaniTur, S. Petrov and D. Klein. 2008. Efficient sentence segmentation using syntactic features. In Spoken Language Technologies (SLT). A. Gravano, M. Jansche, and M. Bacchiani. 2009. Restoring punctuation and capitalization in transcribed speech. In Proc. of ICASSP’09. J. Hatori, T. Matsuzaki, Y. Miyao and J. Tsujii. 2011. Incremental joint POS tagging and dependency parsing in Chinese. In Proc. Of IJCNLP’11. J. Huang and G. Zweig. 2002. Maximum entropy model for punctuation annotation from speech. In Proc. Of ICSLP’02. 759 J.H. Kim and P.C. Woodland. 2001. The use of prosody in a combined system for punctuation generation and speech recognition. In Proc. of EuroSpeech’01. Y. Liu, A. Stolcke, E. Shriberg, and M. Harper. 2005. Using conditional random fields for sentence boundary detection in speech. In Proc. of ACL’05. W. Lu and H.T. Ng. 2010. Better Punctuation Prediction with Dynamic Conditional Random Fields. In Proc. Of EMNLP’10. Pages 177-186. M. Marneffe, B. MacCartney, C.D. Maning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Proc. LREC’06. E. Matusov, A. Mauser, and H. Ney. 2006. Automatic sentence segmentation and punctuation prediction for spoken language translation. In Proc. of IWSLT’06. S. Nakamura. 2009. Overcoming the language barrier with speech translation technology. In Science & Technology Trends - Quarterly Review. No. 31. April 2009. J. Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of IWPT, pages 149–160, Nancy, France. J. Nivre and M. Scholz. 2004. Deterministic dependency parsing of English text. In Proc. COLING’04. M. Paul. 2009. Overview of the IWSLT 2009 Evaluation Campaign. In Proceedings of IWSLT’09. B. Roark, Y. Liu, M. Harper, R. Stewart, M. Lease, M. Snover, I. Shafran, B. Dorr, J. Hale, A. Krasnyanskaya, and L. Yung. 2006. Reranking for sentence boundary detection in conversational speech. In Proc. ICASSP, 2006. A. Stolcke and E. Shriberg, “Automatic linguistic segmentation of conversational speech,” Proc. ICSLP, vol. 2, 1996. A. Stolcke, E. Shriberg, R. Bates, M. Ostendorf, D. Hakkani, M. Plauche, G. Tur, and Y. Lu. 1998. Automatic detection of sentence boundaries and disfluencies based on recognized words. In Proc. of ICSLP’ 98. Takezawa, T. Morimoto, T. Sagisaka, Y. Campbell, N. Iida, H. Sugaya, F. Yokoo, A. Yamamoto, Seiichi. 1998. A Japanese-to-English speech translation system: ATR-MATRIX. In Proc. ICSLP’98. Y. Zhang and J. Nivre. 2011. Transition-based Dependency Parsing with Rich Non-local Features. In Proc. of ACL’11, pages 188-193. Y. Zhang and S. Clark. A Tale of Two Parsers: investigating and combing graph-based and transitionbased dependency parsing using beam-search. 2008. In Proc. of EMNLP’08, pages 562-571. 760
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 761–769, Sofia, Bulgaria, August 4-9 2013. ©2013 Association for Computational Linguistics Discriminative Learning with Natural Annotations: Word Segmentation as a Case Study Wenbin Jiang 1 Meng Sun 1 Yajuan Lü 1 Yating Yang 2 Qun Liu 3, 1 1Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences {jiangwenbin, sunmeng, lvyajuan}@ict.ac.cn 2Multilingual Information Technology Research Center, The Xinjiang Technical Institute of Physics & Chemistry, Chinese Academy of Sciences [email protected] 3Centre for Next Generation Localisation, Faculty of Engineering and Computing, Dublin City University [email protected] Abstract Structural information in web text provides natural annotations for NLP problems such as word segmentation and parsing. In this paper we propose a discriminative learning algorithm to take advantage of the linguistic knowledge in large amounts of natural annotations on the Internet. It utilizes the Internet as an external corpus with massive (although slight and sparse) natural annotations, and enables a classifier to evolve on the large-scale and real-time updated web text. With Chinese word segmentation as a case study, experiments show that the segmenter enhanced with the Chinese wikipedia achieves significant improvement on a series of testing sets from different domains, even with a single classifier and local features. 1 Introduction Problems related to information retrieval, machine translation and social computing need fast and accurate text processing, for example, word segmentation and parsing. Taking Chinese word segmentation for example, the state-of-the-art models (Xue and Shen, 2003; Ng and Low, 2004; Gao et al., 2005; Nakagawa and Uchimoto, 2007; Zhao and Kit, 2008; Jiang et al., 2009; Zhang and Clark, 2010; Sun, 2011b; Li, 2011) are usually trained on human-annotated corpora such as the Penn Chinese Treebank (CTB) (Xue et al., 2005), and perform quite well on corresponding test sets. Since the texts used for corpus annotation are usually drawn from specific fields (e.g. newswire or finance), and the annotated corpora are limited in size (e.g. tens of thousands), the performance of word segmentation tends to degrade sharply when applied to new domains. [Figure 1: Natural annotations for word segmentation and dependency parsing. (a) Natural annotation by hyperlink; (b) knowledge for word segmentation; (c) knowledge for dependency parsing.] The Internet provides large amounts of raw text, and statistics collected from it have been used to improve parsing performance (Nakov and Hearst, 2005; Pitler et al., 2010; Bansal and Klein, 2011; Zhou et al., 2011). The Internet also gives massive (although slight and sparse) natural annotations in the forms of structural information including hyperlinks, fonts, colors and layouts (Sun, 2011a). These annotations usually imply valuable knowledge for problems such as word segmentation and parsing, based on the hypothesis that the subsequences marked by structural information are meaningful fragments in sentences. Figure 1 shows an example. The hyperlink indicates
Creators of web text give valuable annotations during editing, the whole Internet can be treated as a wide-coveraged and real-time updated corpus. Different from the dense and accurate annotations in human-annotated corpora, natural annotations in web text are sparse and slight, it makes direct training of NLP models impracticable. In this work we take for example a most important problem, word segmentation, and propose a novel discriminative learning algorithm to leverage the knowledge in massive natural annotations of web text. Character classification models for word segmentation usually factorize the whole prediction into atomic predictions on characters (Xue and Shen, 2003; Ng and Low, 2004). Natural annotations in web text can be used to get rid of implausible predication candidates for related characters, knowledge in the natural annotations is therefore introduced in the manner of searching space pruning. Since constraint decoding in the pruned searching space integrates the knowledge of the baseline model and natural annotations, it gives predictions not worse than the normal decoding does. Annotation differences between the outputs of constraint decoding and normal decoding are used to train the enhanced classifier. This strategy makes the usage of natural annotations simple and universal, which facilitates the utilization of massive web text and the extension to other NLP problems. Although there are lots of choices, we choose the Chinese wikipedia as the knowledge source due to its high quality. Structural information, including hyperlinks, fonts and colors are used to determine the boundaries of meaningful fragments. Experimental results show that, the knowledge implied in the natural annotations can significantly improve the performance of a baseline segmenter trained on CTB 5.0, an F-measure increment of 0.93 points on CTB test set, and an average increment of 1.53 points on 7 other domains. It is an effective and inexpensive strategy to build word segmenters adaptive to different domains. We hope to extend this strategy to other NLP problems such as named entity recognition and parsing. In the rest of the paper, we first briefly introduce the problems of Chinese word segmentation and the character classification model in section Type Templates Instances n-gram C−2 C−2=@ C−1 C−1= C0 C0=g C1 C1=, C2 C2=Š C−2C−1 C−2C−1=@ C−1C0 C−1C0=g C0C1 C0C1=g, C1C2 C1C2=,Š C−1C1 C−1C1=, function Pu(C0) Pu(C0)=false T(C−2:2) T(C−2:2)= 44444 Table 1: Feature templates and instances for character classification-based word segmentation model. Suppose we are considering the i-th character “g” in “...@ g ,Šó?n®²...”. 2, then describe the representation of the knowledge in natural annotations of web text in section 3, and finally detail the strategy of discriminative learning on natural annotations in section 4. After giving the experimental results and analysis in section 5, we briefly introduce the previous related work and then give the conclusion and the expectation of future research. 2 Character Classification Model Character classification models for word segmentation factorize the whole prediction into atomic predictions on single characters (Xue and Shen, 2003; Ng and Low, 2004). Although natural annotations in web text do not directly support the discriminative training of segmentation models, they do get rid of the implausible candidates for predictions of related characters. 
Given a sentence as a sequence of n characters, word segmentation splits the sequence into m(≤n) subsequences, each of which indicates a meaningful word. Word segmentation can be formalized as a character classification problem (Xue and Shen, 2003), where each character in the sentence is given a boundary tag representing its position in a word. We adopt the boundary tags of Ng and Low (2004), b, m, e and s, where b, m and e mean the beginning, the middle and the end of a word, and s indicates a single-character word. the decoding procedure searches for the labeled character sequence y that maximizes the score func762 Algorithm 1 Perceptron training algorithm. 1: Input: Training corpus C 2: ⃗α ←0 3: for t ←1 .. T do ⊲T iterations 4: for (x, ˜y) ∈C do 5: y ←arg maxy Φ(x, y) · ⃗α 6: if y ̸= ˜y then 7: ⃗α ←⃗α + Φ(x, ˜y) −Φ(x, y) 8: Output: Parameters ⃗α tion: f(x) = arg max y S(y|⃗α, Φ, x) = arg max y Φ(x, y) · ⃗α = arg max y X (i,t)∈y Φ(i, t, x, y) · ⃗α (1) The score of the whole sequence y is accumulated across all its character-label pairs, (i, t) ∈y (s.t. 1 ≤i ≤n and t ∈{b, m, e, s}). The feature function Φ maps a labeled sequence or a characterlabel pair into a feature vector, ⃗α is the parameter vector and Φ(x, y) · ⃗α is the inner product of Φ(x, y) and ⃗α. Analogous to other sequence labeling problems, word segmentation can be solved through a viterbi-style decoding procedure. We omit the decoding algorithm in this paper due to its simplicity and popularity. The feature templates for the classifier is shown in Table 1. C0 denotes the current character, while C−k/Ck denote the kth character to the left/right of C0. The function Pu(·) returns true for a punctuation character and false for others, the function T(·) classifies a character into four types, 1, 2, 3 and 4, representing number, date, English letter and others, respectively. The classifier can be trained with online learning algorithms such as perceptron, or offline learning models such as support vector machines. We choose the perceptron algorithm (Collins, 2002) to train the classifier for the character classification-based word segmentation model. It learns a discriminative model mapping from the inputs x ∈X to the outputs ˜y ∈Y , where X is the set of sentences in the training corpus and Y is the set of corresponding labeled results. Algorithm 1 shows the perceptron algorithm for tuning the parameter ⃗α. The “averaged parameters” technology (Collins, 2002) is used for better performance. n ఊ น ᆑ ௶ ဴ ཝ ҉ स ྸ ࠼ n i-1 i j j+1 (a) Original searching space n n n n n n n n b m e s b m e s b m e s b m e s b m e s b m e s b m e s b m e s b m e s b m e s n ఊ น ᆑ ௶ ဴ ཝ ҉ स ྸ ࠼ n i-1 i j j+1 (b) Shrinked searching space n n n n n n n n b m e s e s b s b m e s b m e s b m e s b m e s e s b s b m e s Figure 2: Shrink of searching space for the character classification-based word segmentation model. 3 Knowledge in Natural Annotations Web text gives massive natural annotations in the form of structural informations, including hyperlinks, fonts, colors and layouts (Sun, 2011a). Although slight and sparse, these annotations imply valuable knowledge for problems such as word segmentation and parsing. As shown in Figure 1, the subsequence P = i..j of sentence S is composed of bolded characters determined by a hyperlink. Such natural annotations do not clearly give each character a boundary tag, or define the head-modifier relationship between two words. 
However, they do help to shrink the set of plausible predication candidates for each character or word. For word segmentation, it implies that characters i −1 and j are the rightmost characters of words, while characters i and j + 1 are the leftmost characters of words. For i −1 or j, the plausible predication set Ψ becomes {e, s}; For i and j + 1, it becomes {b, s}; For other characters c except the two at sentence boundaries, Ψ(c) is still {b, m, e, s}. For dependency parsing, the subsequence P tends to form a connected dependency graph if it contains more than one word. Here we use Ψ to denote the set of plausible head of a word (modifier). There must be a single word w ∈P as the root of subsequence P, whose plausible heads fall out of P, that is, Ψ(w) = {x|x ∈S −P}. For the words in P except the root, the plausible heads for each 763 Algorithm 2 Perceptron learning with natural annotations. 1: ⃗α ←TRAIN(C) 2: for x ∈F do 3: y ←DECODE(x, ⃗α) 4: ˜y ←CONSTRAINTDECODE(x, ⃗α, Ψ) 5: if y ̸= ˜y then 6: C′ ←C′ ∪{˜y} 7: ⃗α ←TRAIN(C ∪C′) word w are the words in P except w itself, that is, Ψ(w) = {x|x ∈P −{w}}. Creators of web text give valuable structural annotations during editing, these annotations reduce the predication uncertainty for atomic characters or words, although not exactly defining which predication is. Figure 2 shows an example for word segmentation, depicting the shrink of searching space for the character classificationbased model. Since the decrement of uncertainty indicates the increment of knowledge, the whole Internet can be treated as a wide-coveraged and real-time updated corpus. We choose the Chinese wikipedia as the external knowledge source, and structural information including hyperlinks, fonts and colors are used in the current work due to their explicitness of representation. 4 Learning with Natural Annotations Different from the dense and accurate annotations in human-annotated corpora, natural annotations are sparse and slight, which makes direct training of NLP models impracticable. Annotations implied by structural information do not give an exact predication to a character, however, they help to get rid of the implausible predication candidates for related characters, as described in the previous section. Previous work on constituency parsing or machine translation usually resort to some kinds of heuristic tricks, such as punctuation restrictions, to eliminate some implausible candidates during decoding. Here the natural annotations also bring knowledge in the manner of searching space pruning. Conditioned on the completeness of the decoding algorithm, a model trained on an existing corpus probably gives better or at least not worse predications, by constraint decoding in the pruned searching space. The constraint decoding procedure integrates the knowledge of the baseline Algorithm 3 Online version of perceptron learning with natural annotations. 1: ⃗α ←TRAIN(C) 2: for x with natural annotations do 3: y ←DECODE(x, ⃗α) 4: ˜y ←CONSTRAINTDECODE(x, ⃗α, Ψ) 5: if y ̸= ˜y then 6: ⃗α ←⃗α + Φ(x, ˜y) −Φ(x, y) 7: output ⃗α at regular time model and natural annotations, the predication differences between the outputs of constraint decoding and normal decoding can be used to train the enhanced classifier. Restrictions of the searching space according to natural annotations can be easily incorporated into the decoder. 
If the completeness of the search algorithm can be guaranteed, constraint decoding in the pruned search space will give predictions that are no worse than those given by normal decoding. If a prediction of constraint decoding differs from that of normal decoding, this indicates that the constrained prediction probably has higher accuracy than the normal one. Furthermore, the degree of difference between the two predictions represents the amount of new knowledge introduced by the natural annotations over the baseline.

The baseline model ⃗α is trained on an existing human-annotated corpus. A set of sentences F with natural annotations is extracted from the Chinese Wikipedia, and we reserve the ones for which constraint decoding and normal decoding give different predictions. The constraint-decoding predictions of the reserved sentences are used as additional training data for the enhanced classifier. The overall training pipeline is analogous to self-training (McClosky et al., 2006); Algorithm 2 shows the pseudo-code. Considering the online nature of the perceptron algorithm, if we are able to leverage much more naturally annotated data (than the Chinese Wikipedia), the online version of the learning procedure shown in Algorithm 3 would be a better choice. The "averaged parameters" technique (Collins, 2002) can easily be adapted here for better performance.

When constraint decoding and normal decoding give different predictions, we only know that the former is probably better than the latter. Although there is no explicit evidence for measuring how large the difference in accuracy between the two predictions is, we can approximate how much new knowledge a naturally annotated sentence brings. For a sentence x, given the predictions of constraint decoding and normal decoding, ˜y and y, the difference of their scores δ = S(y) − S(˜y) indicates the degree to which the current model is mistaken. This indicator helps us to select more valuable training examples.
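The reservation step of Algorithm 2, together with the δ indicator just described, can be sketched as follows. Here decode, constraint_decode and seq_score stand for normal Viterbi decoding, decoding restricted to Ψ, and the sequence score S(·); they are placeholders for whatever implementations are used, not functions defined in the paper.

def reserve_examples(weights, annotated_sentences, decode, constraint_decode, seq_score):
    """annotated_sentences: iterable of (chars, psi) pairs, where psi holds the
    allowed-tag sets derived from natural annotations.
    Returns (chars, constrained_prediction, delta) triples for the sentences on
    which the two decoders disagree, following Algorithm 2."""
    reserved = []
    for chars, psi in annotated_sentences:
        y = decode(chars, weights)                        # normal decoding
        y_tilde = constraint_decode(chars, weights, psi)  # decoding in the pruned space
        if y != y_tilde:
            delta = seq_score(chars, y, weights) - seq_score(chars, y_tilde, weights)
            reserved.append((chars, y_tilde, delta))
    return reserved

# Sorting `reserved` by delta in descending order and keeping only the head of
# the list is one way to select the most valuable examples, as done later in
# Section 5.2.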
The strategy of learning with natural annotations can be adapted to other situations. For example, if we have a list of words or phrases (especially in a specific domain such as medicine or chemistry), we can generate annotated sentences automatically by string matching in a large amount of raw text. This probably provides a simple and effective domain adaptation strategy for already-trained models.

5 Experiments

We use the Penn Chinese Treebank 5.0 (CTB) (Xue et al., 2005) as the existing annotated corpus for Chinese word segmentation. For convenience of comparison with other work on word segmentation, the whole corpus is split into three partitions, as shown in Table 2: chapters 271-300 for testing, chapters 301-325 for development, and the others for training.

Table 2: Data partitioning for CTB 5.0.
Partition    Sections                      # of words
Training     1-270, 400-931, 1001-1151     0.47M
Developing   301-325                       6.66K
Testing      271-300                       7.82K

We choose the Chinese Wikipedia (version 20120812, downloaded from http://download.wikimedia.org/backup-index.html) as the external knowledge source, because its content is of high quality and much better than typical web text. Structural information, including hyperlinks, fonts and colors, is used to derive the annotations. To further evaluate the improvement brought by the fuzzy knowledge in the Chinese Wikipedia, a series of test sets from different domains is adopted. The four test sets from SIGHAN Bakeoff 2010 (Zhao and Liu, 2010) are used; they are drawn from the domains of literature, finance, computer science and medicine. Although these reference sets are annotated according to a different word segmentation standard (Yu et al., 2001), the magnitude of the accuracy improvement is still illustrative, since there are no vast differences between the two segmentation standards. We also annotated another three test sets (available at http://nlp.ict.ac.cn/jiangwenbin/); their texts are drawn from the domains of chemistry, physics and machinery, and each contains 500 sentences.

5.1 Baseline Classifier for Word Segmentation

We train the baseline perceptron classifier for word segmentation on the training set of CTB 5.0, using the development set to determine the best number of training iterations. The performance measurement for word segmentation is the balanced F-measure, F = 2PR/(P + R), a function of precision P and recall R, where P is the percentage of words in the segmentation results that are segmented correctly, and R is the percentage of gold-standard words that are correctly segmented. Figure 3 shows the learning curve of the averaged perceptron on the development set.

[Figure 3: Learning curve (F1 against training iterations) of the averaged perceptron classifier on the CTB development set.]

The second column of Table 3 lists the performance of the baseline classifier on eight test sets, where newswire denotes the test set of CTB itself. The classifier performs much worse on the domains of chemistry, physics and machinery, which indicates the importance of domain adaptation for word segmentation (Gao et al., 2004; Ma and Way, 2009; Gao et al., 2010). The accuracy on the test sets from SIGHAN Bakeoff 2010 is even lower due to the differences in both domain and word segmentation standard.

Table 3: Performance of the baseline classifier and the classifier enhanced with natural annotations from the Chinese Wikipedia.
Dataset          Baseline (F%)   Enhanced (F%)
Newswire         97.35           98.28 (+0.93)
Out-of-Domain
  Chemistry      93.61           95.68 (+2.07)
  Physics        95.10           97.24 (+2.14)
  Machinery      96.08           97.66 (+1.58)
  Literature     92.42           93.53 (+1.11)
  Finance        92.50           93.16 (+0.66)
  Computer       89.46           91.19 (+1.73)
  Medicine       91.88           93.34 (+1.46)
  Average        93.01           94.54 (+1.53)

5.2 Classifier Enhanced with Natural Annotations

The Chinese Wikipedia contains about 0.5 million entries. From their description text, about 3.9 million sentences with natural annotations are extracted. With the CTB training set as the existing corpus C, about 0.8 million sentences are reserved according to Algorithm 2; the segmentations given by constraint decoding are used as additional training data for the enhanced classifier.

As described previously, the difference between the scores of constraint decoding and normal decoding, δ = S(y) − S(˜y), indicates the importance of a constrained segmentation for improving the baseline classifier. The constrained segmentations of the reserved sentences are therefore sorted in descending order of δ. From the beginning of the sorted list, different amounts of segmented sentences are used as additional training data for the enhanced character classifier. Figure 4 shows the performance curve of the enhanced classifiers on the CTB development set. We found that the highest accuracy was achieved when 160,000 sentences were used, while more additional training data did not give further improvement.

[Figure 4: Performance curve (F1 against the count of selected sentences) of the classifier enhanced with selected sentences at different scales, compared with using all sentences.]
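For completeness, the balanced F-measure used in these comparisons can be computed by converting tag sequences into word spans and matching them against the gold standard. The following Python sketch illustrates the metric; it is not the official scoring script.

def tags_to_spans(tags):
    """Convert a b/m/e/s tag sequence into a set of (start, end) word spans."""
    spans, start = set(), 0
    for i, t in enumerate(tags):
        if t in ('b', 's'):
            start = i
        if t in ('e', 's'):
            spans.add((start, i))
    return spans

def segmentation_prf(gold_tags, pred_tags):
    gold, pred = tags_to_spans(gold_tags), tags_to_spans(pred_tags)
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Example: gold tags ['b', 'e', 's'] against predicted ['s', 's', 's'] give
# P = 1/3, R = 1/2 and F = 0.4. Corpus-level scores aggregate the span counts
# over all sentences before computing P, R and F.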
A recent study of self-training for segmentation (Liu and Zhang, 2012) also reported a very similar trend: only a moderate amount of raw data gave the most obvious improvements.

The performance of the enhanced classifier is listed in the third column of Table 3. On the CTB test set, the training data derived from the Chinese Wikipedia brings an F-measure increase of 0.93 points. On the out-of-domain test sets, the improvements are much larger: an average increase of 1.53 points is achieved across the seven domains. This is probably because the knowledge in the CTB training data is concentrated in the newswire domain, while the contents of the Chinese Wikipedia cover a broad range of domains and thus provide knowledge complementary to that of CTB.

Table 4 shows the comparison with other work in Chinese word segmentation. Our model achieves an accuracy higher than that of the state-of-the-art models trained on CTB only, although it uses a single classifier with only local features.

Table 4: Comparison with state-of-the-art work in Chinese word segmentation.
Model                       Accuracy (F%)
(Jiang et al., 2008)        97.85
(Kruengkrai et al., 2009)   97.87
(Zhang and Clark, 2010)     97.79
(Wang et al., 2011)         98.11
(Sun, 2011b)                98.17
Our Work                    98.28

From the viewpoint of resource utilization, the comparison between our system and previous work that does not use additional training data is not entirely fair. However, we believe this work shows another interesting way to improve Chinese word segmentation: it focuses on the utilization of fuzzy and sparse knowledge on the Internet rather than on making full use of a specific human-annotated corpus. On the other hand, since only a single classifier and local features are used in our method, better performance could be achieved by resorting to more complicated features, system combination and other semi-supervised techniques. What is more, since the text on the Internet is wide-coverage and continuously updated, our strategy also helps a word segmenter to be more domain-adaptive and up to date.

6 Related Work

Li and Sun (2009) extracted character classification instances from raw text for Chinese word segmentation, relying on the indications given by punctuation marks between characters. Sun and Xu (2011) utilized features derived from large-scale unlabeled text to improve Chinese word segmentation. Although these two works also made use of large-scale raw text, our method is essentially different from theirs in both the source of knowledge and the learning strategy.

Much effort has been devoted to semi-supervised methods for sequence labeling and word segmentation (Xu et al., 2008; Suzuki and Isozaki, 2008; Haffari and Sarkar, 2008; Tomanek and Hahn, 2009; Wang et al., 2011). A semi-supervised method tries to find an optimal hyperplane based on both annotated data and raw data, thus resulting in a model with better coverage and higher accuracy. Researchers have also investigated unsupervised methods for word segmentation (Zhao and Kit, 2008; Johnson and Goldwater, 2009; Mochihashi et al., 2009; Hewlett and Cohen, 2011). An unsupervised method mines the latent distributional regularities in raw text and automatically induces word segmentation knowledge from them.
Our method also needs large amounts of external data, but it aims to leverage the knowledge contained in the fuzzy and sparse annotations. It is fundamentally different from semi-supervised and unsupervised methods in that we aim to exploit a totally different kind of knowledge: the natural annotations implied by the structural information in web text.

In recent years, much work has been devoted to the improvement of word segmentation in a variety of ways. Typical approaches include the introduction of global training or complicated features (Zhang and Clark, 2007; Zhang and Clark, 2010), the investigation of word internal structures (Zhao, 2009; Li, 2011), the adjustment or adaptation of word segmentation standards (Wu, 2003; Gao et al., 2004; Jiang et al., 2009), the integrated solution of segmentation and related tasks such as part-of-speech tagging and parsing (Zhou and Su, 2003; Zhang et al., 2003; Fung et al., 2004; Goldberg and Tsarfaty, 2008), and strategies of hybrid or stacked modeling (Nakagawa and Uchimoto, 2007; Kruengkrai et al., 2009; Wang et al., 2010; Sun, 2011b).

In parsing, Pereira and Schabes (1992) proposed an extended inside-outside algorithm that infers the parameters of a stochastic CFG from a partially parsed treebank. It uses partial bracketing information to improve parsing performance, but it is specific to constituency parsing, and its computational complexity makes it impractical for the massive natural annotations in web text. There is also work that makes use of word co-occurrence statistics collected from raw text or Internet n-grams to improve parsing performance (Nakov and Hearst, 2005; Pitler et al., 2010; Zhou et al., 2011; Bansal and Klein, 2011). While preparing the related work discussion, we found work on dependency parsing (Spitkovsky et al., 2010) that utilized parsing constraints derived from hypertext annotations to improve unsupervised dependency grammar induction. Compared with their method, the strategy we propose is formal and general; the discriminative learning strategy and the quantitative measurement of fuzzy knowledge enable more effective utilization of the natural annotations on the Internet when adapted to parsing.

7 Conclusion and Future Work

This work presents a novel discriminative learning algorithm that utilizes the knowledge in the massive natural annotations on the Internet. Natural annotations implied by structural information are used to reduce the search space of the classifier; constraint decoding in the pruned search space then gives predictions no worse than normal decoding does. The annotation differences between the outputs of constraint decoding and normal decoding are used to train the enhanced classifier; the linguistic knowledge in the human-annotated corpus and the natural annotations of web text are thus integrated. Experiments on Chinese word segmentation show that the enhanced word segmenter achieves significant improvements on test sets from different domains, although it uses a single classifier with only local features.

Since the contents of web text cover a broad range of domains, they provide knowledge complementary to that of human-annotated corpora, whose domain distribution is concentrated. The content on the Internet is also large-scale and continuously updated, which compensates for the expensive building and updating of corpora. Our strategy therefore enables us to build a classifier that is more domain-adaptive and up to date.
In the future, we will compare this method with self-training to better illustrate the importance of boundary information, and give error analysis on what types of errors are reduced by the method to make this investigation more complete. We will also investigate more efficient algorithms to leverage more massive web text with natural annotations, and further extend the strategy to other NLP problems such as named entity recognition and parsing. Acknowledgments The authors were supported by National Natural Science Foundation of China (Contracts 61202216), 863 State Key Project (No. 2011AA01A207), and National Key Technology R&D Program (No. 2012BAH39B03). Qun Liu’s work was partially supported by Science Foundation Ireland (Grant No.07/CE/I1142) as part of the CNGL at Dublin City University. Sincere thanks to the three anonymous reviewers for their thorough reviewing and valuable suggestions! References Mohit Bansal and Dan Klein. 2011. Web-scale features for full-scale parsing. In Proceedings of ACL. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP, pages 1–8, Philadelphia, USA. Pascale Fung, Grace Ngai, Yongsheng Yang, and Benfeng Chen. 2004. A maximum-entropy chinese parser augmented by transformation-based learning. In Proceedings of TALIP. Jianfeng Gao, Andi Wu, Mu Li, Chang-Ning Huang, Hongqiao Li, Xinsong Xia, and Haowei Qin. 2004. Adaptive chinese word segmentation. In Proceedings of ACL. Jianfeng Gao, Mu Li, Andi Wu, and Chang-Ning Huang. 2005. Chinese word segmentation and named entity recognition: A pragmatic approach. Computational Linguistics. Wenjun Gao, Xipeng Qiu, and Xuanjing Huang. 2010. Adaptive chinese word segmentation with online passive-aggressive algorithm. In Proceedings of CIPS-SIGHAN Workshop. Yoav Goldberg and Reut Tsarfaty. 2008. A single generative model for joint morphological segmentation and syntactic parsing. In Proceedings of ACL-HLT. Gholamreza Haffari and Anoop Sarkar. 2008. Homotopy-based semi-supervised hidden markov models for sequence labeling. In Proceedings of COLING. Daniel Hewlett and Paul Cohen. 2011. Fully unsupervised word segmentation with bve and mdl. In Proceedings of ACL. Wenbin Jiang, Liang Huang, Yajuan Lv, and Qun Liu. 2008. A cascaded linear model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of ACL. Wenbin Jiang, Liang Huang, and Qun Liu. 2009. Automatic adaptation of annotation standards: Chinese word segmentation and pos tagging–a case study. In Proceedings of the 47th ACL. Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In Proceedings of NAACL. Canasai Kruengkrai, Kiyotaka Uchimoto, Jun.ichi Kazama, Yiou Wang, Kentaro Torisawa, and Hitoshi Isahara. 2009. An error-driven word-character hybrid model for joint chinese word segmentation and pos tagging. In Proceedings of ACL-IJCNLP. Zhongguo Li and Maosong Sun. 2009. Punctuation as implicit annotations for chinese word segmentation. Computational Linguistics. Zhongguo Li. 2011. Parsing the internal structure of words: A new paradigm for chinese word segmentation. In Proceedings of ACL. Yang Liu and Yue Zhang. 2012. Unsupervised domain adaptation for joint segmentation and pos-tagging. In Proceedings of COLING. Yanjun Ma and Andy Way. 2009. Bilingually motivated domain-adapted word segmentation for statistical machine translation. 
In Proceedings of EACL. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the HLT-NAACL. Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested pitman-yor language modeling. In Proceedings of ACL-IJCNLP. Tetsuji Nakagawa and Kiyotaka Uchimoto. 2007. A hybrid approach to word segmentation and pos tagging. In Proceedings of ACL. Preslav Nakov and Marti Hearst. 2005. Using the web as an implicit training set: Application to structural ambiguity resolution. In Proceedings of HLTEMNLP. 768 Hwee Tou Ng and Jin Kiat Low. 2004. Chinese partof-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Proceedings of EMNLP. Fernando Pereira and Yves Schabes. 1992. Insideoutside reestimation from partially bracketed corpora. In Proceedings of ACL. Emily Pitler, Shane Bergsma, Dekang Lin, and Kenneth Church. 2010. Using web-scale n-grams to improve base np parsing performance. In Proceedings of COLING. Valentin I. Spitkovsky, Daniel Jurafsky, and Hiyan Alshawi. 2010. Profiting from mark-up: Hyper-text annotations for guided parsing. In Proceedings of ACL. Weiwei Sun and Jia Xu. 2011. Enhancing chinese word segmentation using unlabeled data. In Proceedings of EMNLP. Maosong Sun. 2011a. Natural language processing based on naturally annotated web resources. CHINESE INFORMATION PROCESSING. Weiwei Sun. 2011b. A stacked sub-word model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of ACL. Jun Suzuki and Hideki Isozaki. 2008. Semi-supervised sequential labeling and segmentation using gigaword scale unlabeled data. In Proceedings of ACL. Katrin Tomanek and Udo Hahn. 2009. Semisupervised active learning for sequence labeling. In Proceedings of ACL. Kun Wang, Chengqing Zong, and Keh-Yih Su. 2010. A character-based joint model for chinese word segmentation. In Proceedings of COLING. Yiou Wang, Jun’ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving chinese word segmentation and pos tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of IJCNLP. Andi Wu. 2003. Customizable segmentation of morphologically derived words in chinese. Computational Linguistics and Chinese Language Processing. Jia Xu, Jianfeng Gao, Kristina Toutanova, and Hermann Ney. 2008. Bayesian semi-supervised chinese word segmentation for statistical machine translation. In Proceedings of COLING. Nianwen Xue and Libin Shen. 2003. Chinese word segmentation as lmr tagging. In Proceedings of SIGHAN Workshop. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. In Natural Language Engineering. Shiwen Yu, Jianming Lu, Xuefeng Zhu, Huiming Duan, Shiyong Kang, Honglin Sun, Hui Wang, Qiang Zhao, and Weidong Zhan. 2001. Processing norms of modern chinese corpus. Technical report. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of ACL 2007. Yue Zhang and Stephen Clark. 2010. A fast decoder for joint word segmentation and pos-tagging using a single discriminative model. In Proceedings of EMNLP. Huaping Zhang, Hongkui Yu, Deyi Xiong, and Qun Liu. 2003. Hhmm-based chinese lexical analyzer ictclas. In Proceedings of SIGHAN Workshop. Hai Zhao and Chunyu Kit. 2008. 
Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition. In Proceedings of SIGHAN Workshop. Hongmei Zhao and Qun Liu. 2010. The cips-sighan clp 2010 chinese word segmentation bakeoff. In Proceedings of CIPS-SIGHAN Workshop. Hai Zhao. 2009. Character-level dependencies in chinese: Usefulness and learning. In Proceedings of EACL. Guodong Zhou and Jian Su. 2003. A chinese efficient analyser integrating word segmentation, partofspeech tagging, partial parsing and full parsing. In Proceedings of SIGHAN Workshop. Guangyou Zhou, Jun Zhao, Kang Liu, and Li Cai. 2011. Exploiting web-derived selectional preference to improve statistical dependency parsing. In Proceedings of ACL. 769
2013
75
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 770–779, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Graph-based Semi-Supervised Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging Xiaodong Zeng† Derek F. Wong† Lidia S. Chao† Isabel Trancoso‡ †Department of Computer and Information Science, University of Macau ‡INESC-ID / Instituto Superior T´ecnico, Lisboa, Portugal [email protected], {derekfw, lidiasc}@umac.mo, [email protected] Abstract This paper introduces a graph-based semisupervised joint model of Chinese word segmentation and part-of-speech tagging. The proposed approach is based on a graph-based label propagation technique. One constructs a nearest-neighbor similarity graph over all trigrams of labeled and unlabeled data for propagating syntactic information, i.e., label distributions. The derived label distributions are regarded as virtual evidences to regularize the learning of linear conditional random fields (CRFs) on unlabeled data. An inductive character-based joint model is obtained eventually. Empirical results on Chinese tree bank (CTB-7) and Microsoft Research corpora (MSR) reveal that the proposed model can yield better results than the supervised baselines and other competitive semi-supervised CRFs in this task. 1 Introduction Word segmentation and part-of-speech (POS) tagging are two critical and necessary initial procedures with respect to the majority of high-level Chinese language processing tasks such as syntax parsing, information extraction and machine translation. The traditional way of segmentation and tagging is performed in a pipeline approach, first segmenting a sentence into words, and then assigning each word a POS tag. The pipeline approach is very simple to implement, but frequently causes error propagation, given that wrong segmentations in the earlier stage harm the subsequent POS tagging (Ng and Low, 2004). The joint approaches of word segmentation and POS tagging (joint S&T) are proposed to resolve these two tasks simultaneously. They effectively alleviate the error propagation, because segmentation and tagging have strong interaction, given that most segmentation ambiguities cannot be resolved without considering the surrounding grammatical constructions encoded in a POS sequence (Qian and Liu, 2012). In the past years, several proposed supervised joint models (Ng and Low, 2004; Zhang and Clark, 2008; Jiang et al., 2009; Zhang and Clark, 2010) achieved reasonably accurate results, but the outstanding problem among these models is that they rely heavily on a large amount of labeled data, i.e., segmented texts with POS tags. However, the production of such labeled data is extremely timeconsuming and expensive (Jiao et al., 2006; Jiang et al., 2009). Therefore, semi-supervised joint S&T appears to be a natural solution for easily incorporating accessible unlabeled data to improve the joint S&T model. This study focuses on using a graph-based label propagation method to build a semi-supervised joint S&T model. Graph-based label propagation methods have recently shown they can outperform the state-of-the-art in several natural language processing (NLP) tasks, e.g., POS tagging (Subramanya et al., 2010), knowledge acquisition (Talukdar et al., 2008), shallow semantic parsing for unknown predicate (Das and Smith, 2011). 
As far as we know, however, these methods have not yet been applied to resolve the problem of joint Chinese word segmentation (CWS) and POS tagging. Motivated by the works in (Subramanya et al., 2010; Das and Smith, 2011), for structured problems, graph-based label propagation can be employed to infer valuable syntactic information (ngram-level label distributions) from labeled data to unlabeled data. This study extends this intuition to construct a similarity graph for propagating trigram-level label distributions. The derived label distributions are regarded as prior knowledge to regularize the learning of a sequential model, conditional random fields (CRFs) in this case, on both 770 labeled and unlabeled data to achieve the semisupervised learning. The approach performs the incorporation of the derived labeled distributions by manipulating a “virtual evidence” function as described in (Li, 2009). Experiments on the data from the Chinese tree bank (CTB-7) and Microsoft Research (MSR) show that the proposed model results in significant improvement over other comparative candidates in terms of F-score and out-of-vocabulary (OOV) recall. This paper is structured as follows: Section 2 points out the main differences with the related work of this study. Section 3 reviews the background, including supervised character-based joint S&T model based on CRFs and graph-based label propagation. Section 4 presents the details of the proposed approach. Section 5 reports the experiment results. The conclusion is drawn in Section 6. 2 Related Work Prior supervised joint S&T models present approximate 0.2% - 1.3% improvement in F-score over supervised pipeline ones. The state-of-theart joint models include reranking approaches (Shi and Wang, 2007), hybrid approaches (Nakagawa and Uchimoto, 2007; Jiang et al., 2008; Sun, 2011), and single-model approaches (Ng and Low, 2004; Zhang and Clark, 2008; Kruengkrai et al., 2009; Zhang and Clark, 2010). The proposed approach in this paper belongs to the single-model type. There are few explorations of semi-supervised approaches for CWS or POS tagging in previous works. Xu et al. (2008) described a Bayesian semi-supervised CWS model by considering the segmentation as the hidden variable in machine translation. Unlike this model, the proposed approach is targeted at a general model, instead of one oriented to machine translation task. Sun and Xu (2011) enhanced a CWS model by interpolating statistical features of unlabeled data into the CRFs model. Wang et al. (2011) proposed a semisupervised pipeline S&T model by incorporating n-gram and lexicon features derived from unlabeled data. Different from their concern, our emphasis is to learn the semi-supervised model by injecting the label information from a similarity graph constructed from labeled and unlabeled data. The induction method of the proposed approach also differs from other semi-supervised CRFs algorithms. Jiao et al. (2006), extended by Mann and McCallum (2007), reported a semi-supervised CRFs model which aims to guide the learning by minimizing the conditional entropy of unlabeled data. The proposed approach regularizes the CRFs by the graph information. Subramanya et al. (2010) proposed a graph-based self-train style semi-supervised CRFs algorithm. In the proposed approach, an analogous way of graph construction intuition is applied. But overall, our approach differs in three important aspects: first, novel feature templates are defined for measuring the similarity between vertices. 
Second, the critical property, i.e., sparsity, is considered among label propagation. And third, the derived label information from the graph is smoothed into the model by optimizing a modified objective function. 3 Background 3.1 Supervised Character-based Model The character-based joint S&T approach is operated as a sequence labeling fashion that each Chinese character, i.e., hanzi, in the sequence is assigned with a tag. To perform segmentation and tagging simultaneously in a uniform framework, according to Ng and Low (2004), the tag is composed of a word boundary part, and a POS part, e.g., “B NN” refers to the first character in a word with POS tag “NN”. In this paper, 4 word boundary tags are employed: B (beginning of a word), M (middle part of a word), E (end of a word) and S (single character). As for the POS tag, we shall use the 33 tags in the Chinese tree bank. Thus, the potential composite tags of joint S&T consist of 132 (4×33) classes. The first-order CRFs model (Lafferty et al., 2001) has been the most common one in this task. Given a set of labeled examples Dl = {(xi, yi)}l i=1, where xi = x1 i x2 i ...xN i is the sequence of characters in the ith sentence, and yi = y1 i y2 i ...yN i is the corresponding label sequence. The goal is to learn a CRFs model in the form, p(yi|xi; Λ) = 1 Z(xi; Λ) exp{ N X j=1 K X k=1 λkfk(yj−1 i , yj i , xi, j)} (1) where Z(xi; Λ) is the partition function that normalizes the exponential form to be a probability distribution, and fk(yj−1 i , yj i , xi, j). In this study, 771 the baseline feature templates of joint S&T are the ones used in (Ng and Low, 2004; Jiang et al., 2008), as shown in Table 1. Λ = {λ1λ2...λK} ∈ RK are the weight parameters to be learned. In supervised training, the aim is to estimate the Λ that maximizes the conditional likelihood of the training data while regularizing model parameters: L(Λ) = l X i=1 log p(yi|xi; Λ) −R(Λ) (2) R(Λ) can be any standard regularizer on parameters, e.g., R(Λ) =∥Λ ∥/2δ2, to limit overfitting on rare features and avoid degeneracy in the case of correlated features. This objective function can be optimized by the stochastic gradient method or other numerical optimization methods. Type Font Size Unigram Cn(n = −2, −1, 0, 1, 2) Bigram CnCn+1(n = −2, −1, 0, 1) Date, Digit and Alphabetic Letter T(C−2)T(C−1)T(C0) T(C1)T(C2) Table 1: The feature templates of joint S&T. 3.2 Graph-based Label Propagation Graph-based label propagation, a critical subclass of semi-supervised learning (SSL), has been widely used and shown to outperform other SSL methods (Chapelle et al., 2006). Most of these algorithms are transductive in nature, so they cannot be used to predict an unseen test example in the future (Belkin et al., 2006). Typically, graph-based label propagation algorithms are run in two main steps: graph construction and label propagation. The graph construction provides a natural way to represent data in a variety of target domains. One constructs a graph whose vertices consist of labeled and unlabeled examples. Pairs of vertices are connected by weighted edges which encode the degree to which they are expected to have the same label (Zhu et al., 2003). Popular graph construction methods include k-nearest neighbors (kNN) (Bentley, 1980; Beygelzimer et al., 2006), b-matching (Jebara et al., 2009) and local reconstruction (Daitch et al., 2009). Label propagation operates on the constructed graph. 
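Returning briefly to the supervised model of Section 3.1, the composite tag set and the Table 1 feature templates can be made concrete with a small Python sketch. The POS list is abbreviated and all helper names are illustrative assumptions, not the authors' code.

BOUNDARY_TAGS = ['B', 'M', 'E', 'S']
CTB_POS_TAGS = ['NN', 'VV', 'NR', 'AD', 'JJ', 'CD']   # abbreviated; CTB-7 has 33 POS tags

# Composite tags such as 'B_NN'; with the full POS list this gives 4 * 33 = 132 classes.
JOINT_TAGS = [b + '_' + p for b in BOUNDARY_TAGS for p in CTB_POS_TAGS]

def char_type(c):
    """A crude stand-in for a character-type function: number, letter or other."""
    if c.isdigit():
        return 'num'
    if c.isalpha() and c.isascii():
        return 'let'
    return 'oth'

def baseline_features(chars, i):
    """Unigram, bigram and character-type templates around position i (Table 1)."""
    pad = ['<s>', '<s>'] + list(chars) + ['</s>', '</s>']
    j = i + 2
    feats = [('C%d' % k, pad[j + k]) for k in range(-2, 3)]
    feats += [('C%d%d' % (k, k + 1), pad[j + k] + pad[j + k + 1]) for k in range(-2, 2)]
    feats.append(('T', ''.join(char_type(c) for c in pad[j - 2:j + 3])))
    return feats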
The primary objective is to propagate labels from a few labeled vertices to the entire graph by optimizing a loss function based on the constraints or properties derived from the graph, e.g., smoothness (Zhu et al., 2003; Subramanya et al., 2010; Talukdar et al., 2008), or sparsity (Das and Smith, 2012). State-of-the-art label propagation algorithms include LP-ZGL (Zhu et al., 2003), Adsorption (Baluja et al., 2008), MAD (Talukdar and Crammer, 2009) and Sparse Inducing Penalties (Das and Smith, 2012). 4 Method The emphasis of this work is on building a joint S&T model based on two different kinds of data sources, labeled and unlabeled data. In essence, this learning problem can be treated as incorporating certain gainful information, e.g., prior knowledge or label constraints, of unlabeled data into the supervised model. The proposed approach employs a transductive graph-based label propagation method to acquire such gainful information, i.e., label distributions from a similarity graph constructed over labeled and unlabeled data. Then, the derived label distributions are injected as virtual evidences for guiding the learning of CRFs. Algorithm 1 semi-supervised joint S&T induction Input: Dl = {(xi, yi)}l i=1 labeled sentences Du = {(xi)}l+u i=l+1 unlabeled sentences Output: Λ: a set of feature weights 1: Begin 2: {G} = construct graph (Dl, Du) 3: {q0} = init labelDist ({G}) 4: {q} = propagate label ({G}, {q0}) 5: {Λ} = train crf (Dl ∪Du, {q}) 6: End The model induction includes the following steps (see Algorithm 1): firstly, given labeled and unlabeled data, i.e., Dl = {(xi, yi)}l i=1 with l labeled sentences and Du = {(xi)}l+u i=l+1 with u unlabeled sentences, a specific similarity graph G representing Dl and Du is constructed (construct graph). The vertices (Section 4.1) in the constructed graph consist of all trigrams that occur in labeled and unlabeled sentences, and edge weights between vertices are computed using the cosine distance between pointwise mutual information (PMI) statistics. Afterwards, the estimated label distributions q0 of vertices in the graph G are randomly initialized (init labelDist). Subsequently, 772 the label propagation procedure (propagate label) is conducted for projecting label distributions q from labeled vertices to the entire graph, using the algorithm of Sparse-Inducing Penalties (Das and Smith, 2012) (Section 4.2). The final step (train crf) of the induction is incorporating the inferred trigram-level label distributions q into CRFs model (Section 4.3). 4.1 Graph Construction In most graph-based label propagation tasks, the final effect depends heavily on the quality of the graph. Graph construction thus plays a central role in graph-based label propagation (Zhu et al., 2003). For character-based joint S&T, unlike the unstructured learning problem whose vertices are formed directly by labeled and unlabeled instances, the graph construction is non-trivial. Das and Petrov (2011) mentioned that taking individual characters as the vertices would result in various ambiguities, whereas the similarity measurement is still challenging if vertices corresponding to entire sentences. This study follows the intuitions of graph construction from Subramanya et al. (2010) in which vertices are represented by character trigrams occurring in labeled and unlabeled sentences. 
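Before the formal definition that follows, here is a rough Python sketch of this construction: PMI-weighted context vectors for each character trigram, cosine similarity, and a symmetric k-nearest-neighbour graph. The context features are reduced to the trigram's immediate neighbours, and all names are illustrative assumptions rather than the actual implementation.

import math
from collections import defaultdict

def trigram_feature_counts(sentences):
    """Count co-occurrences of each character trigram with a few context features,
    a reduced version of the feature set in Table 2."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sentences:
        for i in range(1, len(s) - 3):
            tri = s[i:i + 3]
            for f in (('left', s[i - 1]), ('right', s[i + 3]), ('center', s[i + 1])):
                counts[tri][f] += 1
    return counts

def pmi_vectors(counts):
    """Sparse vectors of positive pointwise mutual information values."""
    tri_tot = {t: sum(fs.values()) for t, fs in counts.items()}
    feat_tot = defaultdict(int)
    for fs in counts.values():
        for f, c in fs.items():
            feat_tot[f] += c
    total = float(sum(tri_tot.values()))
    return {t: {f: math.log(c * total / (tri_tot[t] * feat_tot[f]))
                for f, c in fs.items() if c * total > tri_tot[t] * feat_tot[f]}
            for t, fs in counts.items()}

def cosine(u, v):
    dot = sum(w * v[f] for f, w in u.items() if f in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_graph(vectors, k=5):
    """Symmetric k-NN graph: an edge is kept if either endpoint ranks the other
    among its k nearest neighbours; brute force, for clarity only."""
    edges = defaultdict(dict)
    nodes = list(vectors)
    for t in nodes:
        sims = sorted(((cosine(vectors[t], vectors[u]), u) for u in nodes if u != t),
                      reverse=True)
        for s, u in sims[:k]:
            edges[t][u] = s
            edges[u][t] = s
    return edges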
Formally, given a set of labeled sentences Dl, and unlabeled ones Du, where D ≜{Dl, Du}, the goal is to form an undirected weighted graph G = (V, E), where V is defined as the set of vertices which covers all trigrams extracted from Dl and Du. Here, V = Vl ∪Vu, where Vl refers to trigrams that occurs at least once in labeled sentences and Vu refers to trigrams that occur only in unlabeled sentences. The edges E ∈Vl × Vu, connect all the vertices. This study makes use of a symmetric k-NN graph (k = 5) and the edge weights are measured by a symmetric similarity function (Equation (3)): wi,j = ( sim(xi, xj) if j ∈K(i) or i ∈K(j) 0 otherwise (3) where K(i) is the set of the k nearest neighbors of xi(|K(i) = k, ∀i|) and sim(xi, xj) is a similarity measure between two vertices. The similarity is computed based on the co-occurrence statistics over the features in Table 2. Most features we adopted are selected from those of (Subramanya et al., 2010). Note that a novel feature in the last row encodes the classes of surrounding characters, where four types are defined: number, punctuation, alphabetic letter and other. It is especially helpful for the graph to make connections with trigrams that may not have been seen in labeled data but have similar label information. The pointwise mutual information values between the trigrams and each feature instantiation that they have in common are summed to sparse vectors, and their cosine distances are computed as the similarities. Description Feature Trigram + Context x1x2x3x4x5 Trigram x2x3x4 Left Context x1x2 Right Context x4x5 Center Word x3 Trigram - Center Word x2x4 Left Word + Right Context x2x4x5 Right Word + Left Context x1x2x3 Type of Trigram: number, punctuation, alphabetic letter and other t(x2)t(x3)t(x4) Table 2: Features employed to measure the similarity between two vertices, in a given text “x1x2x3x4x5”, where the trigram is “x2x3x4”. The nature of the similarity graph enforces that the connected trigrams with high weight appearing in different texts should have similar syntax configurations. Thus, the constructed graph is expected to provide additional information that cannot be expressed directly in a sequence model (Subramanya et al., 2010). One primary benefit of this property is on enriching vocabulary coverage. In other words, the new features of various trigrams only occurring in unlabeled data can be discovered. As the excerpt in Figure 1 shows, the trigram “天津港” (Tianjin port) has no any label information, as it only occurs in unlabeled data, but fortunately its neighborhoods with similar syntax information, e.g., “上海港” (Shanghai port), “广州 港” (Guangzhou port), can assist to infer the correct tag “M NN”. 4.2 Label Propagation In order to induce trigram-level label distributions from the graph constructed by the previous step, a label propagation algorithm, Sparsity-Inducing Penalties, proposed by Das and Smith (2012), is employed. This algorithm is used because it captures the property of sparsity that only a few labels 773 Figure 1: An excerpt from the similarity graph over trigrams on labeled and unlabeled data. are typically associated with a given instance. In fact, the sparsity is also a common phenomenon among character-based CWS and POS tagging. The following convex objective is optimized on the similarity graph in this case: argmin q l X j=1 ∥qj −rj ∥2 +µ l+u X i=1,k∈N(i) wik ∥qi −qk ∥2 +λ l+u X i=1 ∥qi ∥2 s.t. 
qi ≥0, ∀i ∈V (4) where rj denotes empirical label distributions of labeled vertices, and qi denotes unnormalized estimate measures in every vertex. The wik refers to the similarity between the ith trigram and the kth trigram, and N(i) is a set of neighbors of the ith trigram. µ and λ are two hyperparameters whose values are discussed in Section 5. The squaredloss criterion1 is used to formulate the objective function. The first term in Equation (4) is the seed match loss which penalizes the estimated label distributions qj, if they go too far away from the empirical labeled distributions rj. The second term is the edge smoothness loss that requires qi should be smooth with respect to the graph, such that two vertices connected by an edge with high weight should be assigned similar labels. The final term is a regularizer to incorporate the prior knowledge, e.g., uniform distributions used in (Talukdar et al., 2008; Das and Smith, 2011). This study applies the squared norm of q to encourage sparsity per vertex. Note that the estimated label distribution 1It can be seen as a multi-class extension of quadratic cost criterion (Bengio et al., 2006) or as a variant of the objective in (Zhu et al., 2003). An entropic distance measure could also be used, e.g., KL-divergence (Subramanya et al., 2010; Das and Smith, 2012). qi in Equation (4) is relaxed to be unnormalized, which simplifies the optimization. Thus, the objective function can be optimized by L-BFGS-B (Zhu et al., 1997), a generic quasi-Newton gradientbased optimizer. The partial derivatives of Equation (4) are computed for each parameter of q and then passed on to the optimizer that updates them such that Equation (4) is maximized. 4.3 Semi-Supervised CRFs Training The trigram-level label distributions inferred in the propagation step can be viewed as a kind of valuable “prior knowledge” to regularize the learning on unlabeled data. The final step of the induction is thus to incorporate such prior knowledge into CRFs. Li (2009) generalizes the use of virtual evidence to undirected graphical models and, in particular, to CRFs for incorporating external knowledge. By extending the similar intuition, as illustrated in Figure 2, we modify the structure of a regular linear-chain CRFs on unlabeled data for smoothing the derived label distributions, where virtual evidences, i.e., q in our case, are donated by {v1, v2, . . . , vT }, in parallel with the state variables {y1, y2, . . . , yT }. The modified CRFs model allows us to flexibly define the interaction between estimated state values and virtual evidences by potential functions. Therefore, given labeled and unlabeled data, the learning objective is defined as follows: L(Λ) + l+u X i=l+1 Ep(yi|xi,vi;Λg)[log p(yi, vi|xi; Λ)] (5) where the conditional probability in the second term is denoted as p(yi, vi|xi; Λ) = 1 Z′(xi; Λ) exp{ N X j=1 K X k=1 λkfk(yj−1 i , yj i , xi, j) +α N X t=1 s(yt i, vt i)} (6) The first term in Equation (5) is the same as Equation (2), which is the traditional CRFs learning objective function on the labeled data. The second term is the expected conditional likelihood of unlabeled data. It is directed to maximize the conditional likelihood of hidden states with the derived label distributions on unlabeled data, i.e., p(y, v|x), where y and v are jointly modeled but 774 the probability is still conditional on x. Here, Z′(x; Λ) is the partition function of normalization that is achieved by summing the numerator over both y and v. 
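Stepping back to the propagation step of Section 4.2: an objective of the form in Equation (4) can be optimized with a simple projected gradient loop over the unnormalized scores q, although the paper uses L-BFGS-B. The sketch below is only meant to show the seed-match, smoothness and squared-norm terms acting together; the data structures and step size are our own assumptions.

def propagate_labels(graph, seeds, labels, mu=0.5, lam=0.3, iters=100, step=0.05):
    """graph: {vertex: {neighbour: weight}} (stored symmetrically);
    seeds: {labeled vertex: {label: empirical probability r}};
    returns q: {vertex: {label: unnormalized score}}."""
    q = {v: {y: 1.0 / len(labels) for y in labels} for v in graph}
    for _ in range(iters):
        grad = {v: {y: 2.0 * lam * q[v][y] for y in labels} for v in graph}  # sparsity term
        for v in graph:
            for y in labels:
                if v in seeds:                                  # seed match term
                    grad[v][y] += 2.0 * (q[v][y] - seeds[v].get(y, 0.0))
                for u, w in graph[v].items():                   # edge smoothness term
                    grad[v][y] += 2.0 * mu * w * (q[v][y] - q[u][y])
        for v in graph:
            for y in labels:
                q[v][y] = max(0.0, q[v][y] - step * grad[v][y])  # project onto q >= 0
    return q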
A virtual evidence feature function of s(yt i, vt i) with pre-defined weight α is defined to regularize the conditional distributions of states over the derived label distributions. The learning is impacted by the derived label distributions as Equation (7): firstly, if the trigram xt−1 i xt ixt+1 i at current position does have no corresponding derived label distributions (vt i = null), the value of zero is assigned to all state hypotheses so that the posteriors would not affected by the derived information. Secondly, if it does have a derived label distribution, since the virtual evidence in this case is a distribution instead of a specific label, the label probability in the distribution under the current state hypothesis is assigned. This means that the values of state variables are constrained to agree with the derived distributions. s(yt i, vt i) = ( qxt−1 i xt ixt+1 i (yt i) if vt i ̸= null 0 else (7) The second term in Equation (5) can be optimized by using the expectation maximization (EM) algorithm in the same fashion as in the generative approach, following (Li, 2009). One can iteratively optimize the Q function Q(Λ) = P y p(yi|xi; Λg) log p(yi, vi|xi; Λ), in which Λg is the model estimated from the previous iteration. Here the gradient of the Q function can be measured by: ∂Q(Λ) ∂Λk = X t X yt−1 i ,yt i fk(yt−1 i , yt i, xi, t). (p(yt−1 i , yt i|xi, vi; Λ) −p(yt−1 i , yt i|xi; Λ)) (8) The forward-backward algorithm is used to measure p(yt−1 i , yt i|xi, vi; Λ) and p(yt−1 i , yt i|xi; Λ). Thus, the objective function Equation (5) is optimized as follows: for the instances i = 1, 2, ..., l, the parameters Λ are learned as the supervised manner; for the instances i = l +1, l +2, ..., u+l, in the E-step, the expected value of Q function is computed, based on the current model Λg. In the M-step, the posteriors are fixed and updated Λ that maximizes Equation (5). Figure 2: Modified linear-chain CRFs integrating virtual evidences on unlabeled data. 5 Experiment 5.1 Setting The experimental data are mainly taken from the Chinese tree bank (CTB-7) and Microsoft Research (MSR)2. CTB-7 consists of over one million words of annotated and parsed text from Chinese newswire, magazine news, various broadcast news and broadcast conversation programs, web newsgroups and weblogs. It is a segmented, POS tagged3 and fully bracketed corpus. The train, development and test sets4 from CTB-7 and their corresponding statistics are reported in Table 3. To satisfy the characteristic of the semi-supervised learning problem, the train set, i.e., the labeled data, is formed by a relatively small amount of annotated texts sampled from CTB-7. For the unlabeled data in this experiment, a greater amount of texts is extracted from CTB-7 and MSR, which contains 53,108 sentences with 2,418,690 characters. The performance measurement indicators for word segmentation and POS tagging (joint S&T) are balance F-score, F = 2PR/(P+R), the harmonic mean of precision (P) and recall (R), and outof-vocabulary recall (OOV-R). For segmentation, a token is regarded to be correct if its boundaries match the ones of a word in the gold standard. For the POS tagging, it is correct only if both the boundaries and the POS tags are perfect matches. The experimental platform is implemented based on two toolkits: Mallet (McCallum and Kachites, 2002) and Junto (Talukdar and Pereira, 2010). Mallet is a java-based package for statistical natural language processing, which includes the CRFs implementation. 
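As a brief aside before the rest of the experimental setup: the virtual-evidence feature of Equation (7) is essentially a lookup into the propagated trigram distributions. A minimal sketch follows, with illustrative data structures rather than the actual Mallet-based implementation.

def virtual_evidence(q, chars, t, y):
    """s(y_t, v_t): the propagated score of composite tag y for the trigram centred
    at position t, or 0 when no derived distribution exists for that trigram."""
    if t < 1 or t > len(chars) - 2:
        return 0.0                                 # no full trigram at the boundaries
    dist = q.get(chars[t - 1] + chars[t] + chars[t + 1])   # q: {trigram: {tag: score}}
    return dist.get(y, 0.0) if dist else 0.0

# During training on unlabeled sentences, alpha * virtual_evidence(q, chars, t, y)
# is added to the clique score at each position, so the state variables are
# encouraged to agree with the graph-derived distributions (Equations 5-7).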
Junto is a graph2It can be download at: www.sighan.org/bakeoff2005. 3There is a total of 33 POS tags in CTB-7. 4The extracted sentences in train, development and test set were assigned with the composite tags as described in Section 3.1. 775 based label propagation toolkit that provides several state-of-the-art algorithms. Data #Sent #Word #Char #OOV Train 17,968 374,697 596,360 Develop 1,659 46,637 79,283 0.074 Test 2,037 65,219 104,502 0.089 Table 3: Training, development and testing data. 5.2 Baseline and Proposed Models In the experiment, the baseline supervised pipeline and joint S&T models are built only on the train data. The proposed model will also be compared with the semi-supervised pipeline S&T model described in (Wang et al., 2011). In addition, two state-of-the-art semi-supervised CRFs algorithms, Jiao’s CRFs (Jiao et al., 2006) and Subramanya’s CRFs (Subramanya et al., 2010), are also used to build joint S&T models. The corresponding settings of the above candidates are listed below: • Baseline I: a supervised CRFs pipeline S&T model. The feature templates are from Zhao et al. (2006) and Wu et al. (2008). • Wang’s model: a semi-supervised CRFs pipeline S&T model. The same feature templates in (Wang et al., 2011) are used, i.e., “+n-gram+cluster+lexicon”. • Baseline II: a supervised CRFs joint S&T model. The feature templates introduced in Section 3.1 are used. • Jiao’s model: a semi-supervised CRFs joint S&T model trained using the entropy regularization (ER) criteria (Jiao et al., 2006). The optimization method proposed by Mann and McCallum (2007) is applied. • Subramanya’s model: a self-train style semi-supervised CRFs joint S&T model based on the same parameters used in (Subramanya et al., 2010). • Our model: several parameters in our model are needed to tune based on the development set, e.g., µ, λ and α. In all the CRFs models above, the Gaussian regularizer and stochastic gradient descent method are employed. 5.3 Main Results This experiment yielded a similarity graph that consists of 462,962 trigrams from labeled and unlabeled data. The majority (317,677 trigrams) occurred only in unlabeled data. Based on the development data, the hyperparameters of our model were tuned among the following settings: for the graph propagation, µ ∈{0.2, 0.5, 0.8} and λ ∈{0.1, 0.3, 0.5, 0.8}; for the CRFs training, α ∈{0.1, 0.3, 0.5, 0.7, 0.9}. The best performed joint settings are µ = 0.5, λ = 0.3 and α = 0.7. With the chosen set of hyperparameters, the test data was used to measure the final performance. Model Segmentation POS Tagging F1 OOV-R F1 OOV-R Baseline I 94.27 60.12 91.08 51.72 Wang’s 95.17 63.10 91.64 53.29 Baseline II 95.14 61.52 91.61 52.29 Jiao’s 95.58 63.05 92.11 53.27 Subramanya’s 96.30 67.12 92.46 57.15 Our model 96.85 68.09 92.89 58.36 Table 4: The performance of segmentation and POS tagging on testing data. Table 4 summarizes the performance of segmentation and POS tagging on the test data, in comparison with the other five models. Firstly, as expected, for the two supervised baselines, the joint model outperforms the pipeline one, especially on segmentation. It obtains 0.92% and 2.32% increase in terms of F-score and OOV-R respectively. This outcome verifies the commonly accepted fact that the joint model can substantially improve the pipeline one, since POS tags provide additional information to word segmentation (Ng and Low, 2004). 
Secondly, it is also noticed that all four semi-supervised models are able to benefit from unlabeled data and greatly improve the results with respect to the baselines. On the whole, for segmentation, they achieve average improvements of 1.02% and 6.8% in F-score and OOV-R; whereas for POS tagging, the average increments of F-sore and OOV-R are 0.87% and 6.45%. An interesting phenomenon is found among the comparisons with baselines that the supervised joint model (Baseline II) is even competitive with semisupervised pipeline one (Wang et al., 2011). This illustrates the effects of error propagation in the pipeline approach. Thirdly, in what concerns the semi-supervised approaches, the three joint S&T models, i.e., Jiao’s, Subramanya’s and our model, are superior to the pipeline model, i.e., Wang’s 776 model. Moreover, the two graph-based approaches, i.e., Subramanya’s and our model, outperform the others. Most importantly, the boldface numbers in the last row illustrate that our model does achieve the best performance. Overall, for word segmentation, it obtains average improvements of 1.43% and 8.09% in F-score and OOV-R over others; for POS tagging, it achieves average improvements of 1.09% and 7.73%. 0 10,000 20,000 30,000 40,000 50,000 94.0 94.5 95.0 95.5 96.0 96.5 97.0 97.5 W ang's Jiao's Subramanya's Our F-score Number of unlabeled sentences 0 10,000 20,000 30,000 40,000 50,000 91.0 91.5 92.0 92.5 93.0 93.5 W ang's Jiao's Subramanya's Our F-score Number of unlabeled sentences 0 10,000 20,000 30,000 40,000 50,000 60.0 62.5 65.0 67.5 70.0 W ang's Jiao's Subramanya's Our OOV-R Number of unlabeled sentences 0 10,000 20,000 30,000 40,000 50,000 51.0 52.5 54.0 55.5 57.0 58.5 W ang's Jiao's Subramanya's Our OOV-R Number of unlabeled sentences Figure 3: The learning curves of semi-supervised models on unlabeled data, where left graphs are segmentation and the right ones are tagging. 5.4 Learning Curve An additional experiment was conducted to investigate the impact of unlabeled data for the four semi-supervised models. Figure 3 illustrates the curves of F-score and OOV-R for segmentation and tagging respectively, as the unlabeled data size is progressively increased in steps of 6,000 sentences. It can be clearly observed that all curves of our model are able to mount up steadily and achieve better gains over others consistently. The most competitive performance of the other three candidates is achieved by Subramanya’s model. This strongly reveals that the knowledge derived from the similarity graph does effectively strengthen the model. But in Subramanya’s model, when the unlabeled size ascends to approximately 30,000 sentences the curves become nearly asymptotic. The semi-supervised pipeline model, Wang’s model, presents a much slower growth on all curves over the others and also begins to overfit with large unlabeled data sizes (>25,000 sentences). The figure also shows an erratic fluctuation of Jiao’s model. Since this approach aims at minimizing conditional entropy over unlabeled data and encourages finding putative labelings for unlabeled data, it results in a data-sensitive model (Li et al., 2009). 5.5 Analysis & Discussion A statistical analysis of the segmentation and tagging results of the supervised joint model (Baseline II) and our model is carried out to comprehend the influence of the graph-based semi-supervised behavior. 
For word segmentation, the most significant improvement of our model is mainly concentrated on two kinds of words which are known for their difficulties in terms of CWS: a) named entities (NE), e.g., “天津港” (Tianjin port) and “保税 区” (free tax zone); and b) Chinese numbers (CN), e.g., “八点五亿” (eight hundred and fifty million) and “百分之七十二” (seventy two percent). Very often, these words do not exist in the labeled data, so the supervised model is hard to learn their features. Part of these words, however, may occur in the unlabeled data. The proposed semi-supervised approach is able to discover their label information with the help of a similarity graph. Specifically, it learns the label distributions from similar words (neighborhoods), e.g., “上海港” (Shanghai port), “保护区” (protection zone), “九点七亿” (nine hundred and seventy million). The statistics in Table 5 demonstrate significant error reductions of 50.44% and 48.74% on test data, corresponding to NE and CN respectively. Type #word #baErr #gbErr ErrDec% NE 471 226 112 50.44 CN 181 119 61 48.74 Table 5: The statistics of segmentation error for named entities (NE) and Chinese numbers (CN) in test data. #baErr and #gbErr denote the count of segmentations by Baseline II and our model; ErrDec% denotes the error reduction. On the other hand, to better understand the tagging results, we summarize the increase and decrease of the top five common tagging error patterns of our model over Baseline II for the correctly segmented words, as shown in Table 6. The error pattern is defined by “A→B” that refers the true tag of “A” is annotated by a tag of “B”. The obvious improvement brought by our model occurs with the tags “NN”, “CD”, “NR”, “JJ” and “NR”, where errors are reduced 60.74% on aver777 Pattern #baErr ↓ Pattern #baErr ↑ NN→VV 58 38 NN→NR 13 6 CD→NN 41 27 IJ→ON 9 5 NR→VV 29 17 VV→NN 4 3 JJ→NN 18 11 NR→NN 1 3 NR→VA 19 10 JJ→AD 1 2 Table 6: The statistics of POS tagging error patterns in test data. #baErr denote the count of tagging error by Baseline II, while ↓and ↑denotes the number of error reduced or increased by our model. age. More impressively, there is a large portion of fixed error pattern instances stemming from OOV words. Meanwhile, it is also observed that the disambiguation of error patterns in the right portion of the table slightly suffers from our approach. In reality, it is impossible and unrealistic to request a model to be “no harms but only benefits” under whatever circumstances. 6 Conclusion This study introduces a novel semi-supervised approach for joint Chinese word segmentation and POS tagging. The approach performs the semisupervised learning in the way that the trigramlevel distributions inferred from a similarity graph are used to regularize the learning of CRFs model on labeled and unlabeled data. The empirical results indicate that the similarity graph information and the incorporation manner of virtual evidences present a positive effect to the model induction. Acknowledgments The authors are grateful to the Science and Technology Development Fund of Macau and the Research Committee of the University of Macau for the funding support for our research, under the reference No. 017/2009/A and RG060/0910S/CS/FST. The authors also wish to thank the anonymous reviewers for many helpful comments. References Shumeet Baluja, Rohan Seth, D. Sivakumar, Yushi Jing, Jay Yagnik, Shankar Kumar, Deepak Ravich, and Mohamed Aly. 2008. Video suggestion and discovery for youtube: taking random walks through the view graph. 
In Proceedings of WWW, pages 895904, Beijing, China. Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. 2006. Manifold regularization. Journal of machine learning research, 7:2399–2434. Yoshua Bengio, Olivier Delalleau, and Nicolas Le Roux. 2006. Label propogation and quadratic criterion. MIT Press. Jon Louis Bentley. 1980. Multidimensional divide-and -conquer. Communications of the ACM, 23(4):214 229. Alina Beygelzimer, Sham Kakade, and John Langford. 2006. Cover trees for nearest neighbor. In Proceedings of ICML, pages 97-104, New York, USA Olivier Chapelle, Bernhard Sch¨o lkopf, and Alexander Zien. 2006. Semi-supervised learning. MIT Press. Samuel I. Daitch, Jonathan A. Kelner, and Daniel A. Spielman. 2009. Fitting a graph to vector data. In Proceedings of ICML, 201-208, NY, USA. Dipanjan Das and Noah A. Smith. 2011. Semisupervised framesemantic parsing for unknown predicates. In Proceedings of ACL, pages 14351444, Portland, Oregon, USA. Dipanjan Das and Slav Petrov. 2011. Unsupervised Part-of-Speech Tagging with Bilingual Graph-based Projections. In Proceedings of ACL, pages 14351444, Portland, Oregon, USA. Dipanjan Das and Noah A. Smith. 2012. Graph-based lexicon expansion with sparsity-inducing penalties. In Proceedings of NAACL, pages 677-687, Montr´eal, Canada. Tony Jebara, Jun Wang, and Shih-Fu Chang. 2009. Graph construction and b-matching for semisupervised learning. In Proceedings of ICML, 441448, New York, USA. Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan Liu. 2008. A Cascaded Linear Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging. In Proceedings of ACL, pages 897-904, Columbus, Ohio. Wenbin Jiang, Liang Huang, and Qun Liu. 2009. Automatic Adaptation of Annotation Standards: Chinese Word Segmentation and POS Tagging – A Case Study. In Proceedings of he ACL and the 4th IJCNLP of the AFNLP, pages 522–530, Suntec, Singapore. Feng Jiao, Shaojun Wang, and Chi-Hoon Lee. 2006. Semi-supervised conditional random fields for improved sequence segmentation and labeling. In In Proceedings of ACL, pages 209–216, Sydney, Australia. Canasai Kruengkrai, Kiyotaka Uchimoto, Jun’ichi Kazama, Yiou Wang, Kentaro Torisawa, and Hitoshi Isahara. 2009. An error-driven word-character hybrid model for joint Chinese word segmentation and POS tagging. In Proceedings of ACL and IJCNLP of the AFNLP, pages 513- 521, Suntec, Singapore August. 778 John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Field: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of ICML, pages 282289, Williams College, USA. Xiao Li. 2009. On the use of virtual evidence in conditional random fields. In Proceedings of EMNLP, pages 1289-1297, Singapore. Xiao Li, Ye-Yi Wang, and Alex Acero. 2009. Extracting structured information from user queries with semi-supervised conditional random fields In Proceedings of ACM SIGIR, pages 572-579, Boston, USA. Gideon S. Mann and Andrew McCallum. 2007. Efficient computation of entropy gradient for semisupervised conditional random fields. In Proceedings of NAACL, pages 109-112, New York, USA. McCallum and Andrew Kachites. 2002. MALLET: A Machine Learning for Language Toolkit. Software at http://mallet.cs.umass.edu. Tetsuji Nakagawa and Kiyotaka Uchimoto. 2007. A hybrid approach to word segmentation and POS tagging. In Proceedings of ACL Demo and Poster Session, pages 217–220, Prague, Czech Republic. Hwee Tou Ng and Jin Kiat Low 2004. Chinese partof-speech tagging: One-at-a-time or all-at-once? 
word-based or character-based? In Proceedings of EMNLP, Barcelona, Spain. Xian Qian and Yang Liu. 2012. Joint Chinese Word Segmentation, POS Tagging and Parsing. In Proceedings of EMNLP-CoNLL, pages 501-511, Jeju Island, Korea. Yanxin Shi and Mengqiu Wang. 2007. A dual-layer CRF based joint decoding method for cascade segmentation and labelling tasks. In Proceedings of IJCAI, Hyderabad, India. Amarnag Subramanya, Slav Petrov, and Fernando Pereira. 2010. Efficient graph-based semisupervised learning of structured tagging models. In Proceedings of EMNLP, pages 167-176, Massachusetts, USA. Weiwei Sun. 2011. A Stacked Sub-Word Model for Joint Chinese Word Segmentation and Part-ofSpeech Tagging. In Proceedings of ACL, pages 1385–1394, Portland, Oregon. Weiwei Sun and Jia Xu. 2011. Enhancing Chinese word segmentation using unlabeled data. In Proceedings of EMNLP, pages 970-979, Scotland, UK. Partha Pratim Talukdar, Joseph Reisinger, Marius Pasca, Deepak Ravichandran, Rahul Bhagat, and Fernando Pereira. 2008. Weakly Supervised Acquisition of Labeled Class Instances using Graph Random Walks. In Proceedings of EMNLP, pages 582590, Hawaii, USA. Partha Pratim Talukdar and Koby Crammer. 2009. New Regularized Algorithms for Transductive Learning. In Proceedings of ECML-PKDD, pages 442 - 457, Bled, Slovenia. Partha Pratim Talukdar and Fernando Pereira. 2010. Experiments in graph-based semi-supervised learning methods for class-instance acquisition. In Proceedings of ACL, pages 1473-1481, Uppsala, Sweden. Yiou Wang, Jun’ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving Chinese word segmentation and POS tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of IJCNLP, pages 309–317, Chiang Mai, Thailand. Yu-Chieh Wu Jie-Chi Yang, and Yue-Shi Lee. 2008. Description of the NCU Chinese Word Segmentation and Part-of-Speech Tagging for SIGHAN Bakeoff. In Proceedings of the SIGHAN Workshop on Chinese Language Processing, pages 161-166, Hyderabad, India. Jia Xu, Jianfeng Gao, Kristina Toutanova, and Hermann Ney. 2008. Bayesian semi-supervised chinese word segmentation for statistical machine translation. In Proceedings of COLING, pages 1017-1024, Manchester, UK. Yue Zhang and Stephen Clark. 2008. Joint word segmentation and POS tagging using a single perceptron. In Proceedings of EMNLP, pages 888-896, Columbus, Ohio. Yue Zhang and Stephen Clark. 2010. A fast decoder for joint word segmentation and POS-tagging using a single discriminative model. In Proceedings of EMNLP, pages 843-852, Massachusetts, USA. Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006. Effective tag set selection in Chinese word segmentation via conditional random field modeling. In Proceedings of PACLIC, pages 87-94, Wuhan, China. Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. 2003. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of ICML, pages 912–919, Washington DC, USA. Ciyou Zhu, Richard H. Byrd, Peihuang Lu, and Jorge Nocedal. 1997. L-BFGS-B: Fortran subroutines for large scale bound constrained optimization. ACM Transactions on Mathematical Software, 23:550560. 779
2013
76
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 780–790, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics An Infinite Hierarchical Bayesian Model of Phrasal Translation Trevor Cohn Department of Computer Science The University of Sheffield Sheffield, United Kingdom [email protected] Gholamreza Haffari Faculty of Information Technology Monash University Clayton, Australia [email protected] Abstract Modern phrase-based machine translation systems make extensive use of wordbased translation models for inducing alignments from parallel corpora. This is problematic, as the systems are incapable of accurately modelling many translation phenomena that do not decompose into word-for-word translation. This paper presents a novel method for inducing phrase-based translation units directly from parallel data, which we frame as learning an inverse transduction grammar (ITG) using a recursive Bayesian prior. Overall this leads to a model which learns translations of entire sentences, while also learning their decomposition into smaller units (phrase-pairs) recursively, terminating at word translations. Our experiments on Arabic, Urdu and Farsi to English demonstrate improvements over competitive baseline systems. 1 Introduction The phrase-based approach (Koehn et al., 2003) to machine translation (MT) has transformed MT from a narrow research topic into a truly useful technology to end users. Leading translation systems (Chiang, 2007; Koehn et al., 2007; Marcu et al., 2006) all use some kind of multi-word translation unit, which allows translations to be produced from large canned units of text from the training corpus. Larger phrases allow for the lexical context to be considered in choosing the translation, and also limit the number of reordering decisions required to produce a full translation. Word-based translation models (Brown et al., 1993) remain central to phrase-based model training, where they are used to infer word-level alignments from sentence aligned parallel data, from which phrasal translation units are extracted using a heuristic. Although this approach demonstrably works, it suffers from a number of shortcomings. Firstly, many phrase-based phenomena which do not decompose into word translations (e.g., idioms) will be missed, as the underlying word-based alignment model is unlikely to propose the correct alignments. Secondly, the relationship between different phrase-pairs is not considered, such as between single word translations and larger multi-word phrase-pairs or where one large phrase-pair subsumes another. This paper develops a phrase-based translation model which aims to address the above shortcomings of the phrase-based translation pipeline. Specifically, we formulate translation using inverse transduction grammar (ITG), and seek to learn an ITG from parallel corpora. The novelty of our approach is that we develop a Bayesian prior over the grammar, such that a nonterminal becomes a ‘cache’ learning each production and its complete yield, which in turn is recursively composed of its child constituents. This is closely related to adaptor grammars (Johnson et al., 2007a), which also generate full tree rewrites in a monolingual setting. Our model learns translations of entire sentences while also learning their decomposition into smaller units (phrase-pairs) recursively, terminating at word translations. 
The model is richly parameterised, such that it can describe phrase-based phenomena while also explicitly modelling the relationships between phrasepairs and their component expansions, thus ameliorating the disconnect between the treatment of words versus phrases in the current MT pipeline. We develop a Bayesian approach using a PitmanYor process prior, which is capable of modelling a diverse range of geometrically decaying distributions over infinite event spaces (here translation phrase-pairs), an approach shown to be state of the art for language modelling (Teh, 2006). 780 We are not the first to consider this idea; Neubig et al. (2011) developed a similar approach for learning an ITG using a form of Pitman-Yor adaptor grammar. However Neubig et al.’s work was flawed in a number of respects, most notably in terms of their heuristic beam sampling algorithm which does not meet either of the Markov Chain Monte Carlo criteria of ergodicity or detailed balance. Consequently their approach does not constitute a valid Bayesian model. In contrast, this paper provides a more rigorous and theoretically sound method. Moreover our approach results in consistent translation improvements across a number of translation tasks compared to Neubig et al.’s method, and a competitive phrase-based baseline. 2 Related Work Inversion transduction grammar (or ITG) (Wu, 1997) is a well studied synchronous grammar formalism. Terminal productions of the form X → e/f generate a word in two languages, and nonterminal productions allow phrasal movement in the translation process. Straight productions, denoted by their non-terminals inside square brackets [...], generate their symbols in the given order in both languages, while inverted productions, indicated by angled brackets ⟨...⟩, generate their symbols in the reverse order in the target language. In the context of machine translation, ITG has been explored for statistical word alignment in both unsupervised (Zhang and Gildea, 2005; Cherry and Lin, 2007; Zhang et al., 2008; Pauls et al., 2010) and supervised (Haghighi et al., 2009; Cherry and Lin, 2006) settings, and for decoding (Petrov et al., 2008). Our paper fits into the recent line of work for jointly inducing the phrase table and word alignment (DeNero and Klein, 2010; Neubig et al., 2011). The work of DeNero and Klein (2010) presents a supervised approach to this problem, whereas our work is unsupervised hence more closely related to Neubig et al. (2011) which we describe in detail below. A number of other approaches have been developed for learning phrase-based models from bilingual data, starting with Marcu and Wong (2002) who developed an extension to IBM model 1 to handle multi-word units. This pioneering approach suffered from intractable inference and moreover, suffers from degenerate solutions (DeNero and Klein, 2010). Our approach is similar to these previous works, except that we impose additional constraints on how phrase-pairs can be tiled to produce a sentence pair, and moreover, we seek to model the embedding of phrase-pairs in one another, something not considered by this prior work. Another strand of related research is in estimating a broader class of synchronous grammars than ITGs, such as SCFGs (Blunsom et al., 2009b; Levenberg et al., 2012). Conceptually, our work could be readily adapted to general SCFGs using similar techniques. 
This work was inspired by adaptor grammars (Johnson et al., 2007a), a monolingual grammar formalism whereby a non-terminal rewrites in a single step as a complete subtree. The model prior allows for trees to be generated as a mixture of a cache and a base adaptor grammar. In our case, we have generalised to a bilingual setting using an ITG. Additionally, we have extended the model to allow recursive nesting of adapted non-terminals, such that we end up with an infinitely recursive formulation where the top-level and base distributions are explicitly linked together. As mentioned above, ours is not the first work attempting to generalise adaptor grammars for machine translation; (Neubig et al., 2011) also developed a similar approach based around ITG using a Pitman-Yor Process prior. Our approach improves upon theirs in terms of the model and inference, and critically, this is borne out in our experiments where we show uniform improvements in translation quality over a baseline system, as compared to their almost entirely negative results. We believe that their approach had a number of flaws: For inference they use a beam-search, which may speed up processing but means that they are no longer sampling from the true distribution, nor a distribution with the same support as the posterior. Moreover they include a Metropolis-Hastings correction step, which is required to correct the samples to account for repeated substructures which will be otherwise underrepresented. Consequently their approach does not constitute a Markov Chain Monte Carlo sampler, but rather a complex heuristic. The other respect in which this work differs from Neubig et al. (2011) is in terms of model formulation. They develop an ITG which generates phrase-pairs as terminals, while we employ a more restrictive word-based model which forces the decomposition of every phrase-pair. This is an important restriction as it means that we jointly learn 781 a word and phrase based model, such that word based phenomena can affect the phrasal structures. Finally our approach models separately the three different types of ITG production (monotone, swap and lexical emission), allowing for a richer parameterisation which the model exploits by learning different hyper-parameter values. 3 Model The generative process of the model follows that of ITG with the following simple grammar X →[X X] | ⟨X X⟩ X →e/f | e/⊥| ⊥/f , where [·] denotes monotone ordering and ⟨·⟩denotes a swap in one language. The symbol ⊥denotes the empty string. This corresponds to a simple generative story, with each stage being a nonterminal rewrite starting with X and terminating when there are no frontier non-terminals. A popular variant is a phrasal ITG, where the leaves of the ITG tree are phrase-pairs and the training seeks to learn a segmentation of the source and target which yields good phrases. We would not expect this model to do very well as it cannot consider overlapping phrases, but instead is forced into selecting between many competing – and often equally viable – options. Our approach improves over the phrasal model by recursively generating complete phrases. This way we don’t insist on a single tiling of phrases for a sentence pair, but explicitly model the set of hierarchically nested phrases as defined by an ITG derivation. 
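To make the derivation structure concrete, the following minimal sketch (illustrative only; not taken from the paper) encodes trees over this grammar and reads off their yields and the set of hierarchically nested phrase-pairs they define:

```python
# Minimal ITG derivation trees for the grammar
#   X -> [X X] | <X X> | e/f | e/NULL | NULL/f
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    rule: str                       # "mono", "swap" or "emit"
    left: Optional["Node"] = None   # child derivations for mono/swap
    right: Optional["Node"] = None
    e: Optional[str] = None         # source word (None stands for the empty string)
    f: Optional[str] = None         # target word

def yields(t: Node) -> Tuple[List[str], List[str]]:
    """Source and target strings generated by a derivation."""
    if t.rule == "emit":
        return ([t.e] if t.e else []), ([t.f] if t.f else [])
    e1, f1 = yields(t.left)
    e2, f2 = yields(t.right)
    if t.rule == "mono":            # [X X]: same order in both languages
        return e1 + e2, f1 + f2
    return e1 + e2, f2 + f1         # <X X>: target-side order is inverted

def nested_phrase_pairs(t: Node) -> List[Tuple[str, str]]:
    """Each internal node contributes one hierarchically nested phrase-pair."""
    if t.rule == "emit":
        return []
    e, f = yields(t)
    return ([(" ".join(e), " ".join(f))]
            + nested_phrase_pairs(t.left) + nested_phrase_pairs(t.right))
```

For instance, a monotone node over the emissions w1/t1 and w2/t2 yields the single nested phrase-pair ("w1 w2", "t1 t2"), while the emissions themselves supply the word-level translations.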
This approach is closer in spirit to the phrase-extraction heuristic, which defines a set of ‘atomic’ terminal phrase-pairs and then extracts every combination of these atomic phase-pairs which is contiguous in the source and target.1 The generative process is that we draw a complete ITG tree, t ∼P2(·), as follows: 1. choose the rule type, r ∼R, where r ∈ {mono, swap, emit} 2. for r = mono (a) draw the complete subtree expansion, t = X →[. . .] ∼TM 3. for r = swap (a) draw the complete subtree expansion, t = X →⟨. . .⟩∼TS 1Our technique considers the subset of phrase-pairs which are consistent with the ITG tree. 4. for r = emit (a) draw a pair of strings, (e, f) ∼E (b) set t = X →e/f Note that we split the problem of drawing a tree into two steps: first choosing the top-level rule type and then drawing a rule of that type. This gives us greater control than simply drawing a tree of any type from one distribution, due to our parameterisation of the priors over the model parameters TM, TS and E. To complete the generative story, we need to specify the prior distributions for TM, TS and E. First, we deal with the emission distribution, E which we drawn from a Dirichlet Process prior E ∼DP(bE, P0). We restrict the emission rules to generate word pairs rather than phrase pairs.2 For the base distribution, P0, we use a simple uniform distribution over word pairs, P0(e, f) =      η2 1 VEVF e ̸= ⊥, f ̸= ⊥ η(1 −η) 1 VF e = ⊥, f ̸= ⊥ η(1 −η) 1 VE e ̸= ⊥, f = ⊥ , where the constant η denotes the binomial probability of a word being aligned.3 We use Pitman-Yor Process priors for the TM and TS parameters TM ∼PYP(aM, bM, P1(·|r = mono)) TS ∼PYP(aS, bS, P1(·|r = swap)) where P1(t1, t2|r) is a distribution over a pair of trees (the left and right children of a monotone or swap production). P1 is defined as follows: 1. choose the complete left subtree t1 ∼P2, 2. choose the complete right subtree t2 ∼P2, 3. set t = X →[t1 t2] or t = X →⟨t1 t2⟩ depending on r This generative process is mutually recursive: P2 makes draws from P1 and P1 makes draws from P2. The recursion is terminated when the rule type r = emit is drawn. Following standard practice in Bayesian models, we integrate out R, TM, TS and E. This means draws from P2 (or P1) are no longer iid: for any non-trivial tree, computing its probability under this model is complicated by the fact 2Note that we could allow phrases here, but given the model can already reason over phrases by way of its hierarchical formulation, this is an unnecessary complication. 3We also experimented with using word translation probabilities from IBM model 1, based on the prior used by Levenberg et al. (2012), however we found little empirical difference compared with this simpler uniform model. 782 that the probability of its two subtrees are interdependent. This is best understood in terms of the Chinese Restaurant Franchise (CRF; Teh et al. (2006)), which describes the posterior distribution after integrating out the model parameters. In our case we can consider the process of drawing a tree from P2 as a customer entering a restaurant and choosing where to sit, from an infinite set of tables. The seating decision is based on the number of other customers at each table, such that popular tables are more likely to be joined than unpopular or empty ones. If the customer chooses an occupied table, the identity of the tree is then set to be the same as for the other customers also seated there. For empty tables the tree must be sampled from the base distribution P1. 
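The Chinese-restaurant view of these draws can be made concrete by sampling forward from the prior. The sketch below is purely illustrative (the paper integrates the parameters out rather than sampling them forward), and the model object with its rule-type and word-pair draws is an assumed placeholder:

```python
# Forward sampling from the prior via the Chinese-restaurant representation
# of PYP(a, b, base).  Illustrative; not the inference procedure of the paper.
import random

class Restaurant:
    def __init__(self, a, b):
        self.a, self.b = a, b
        self.tables = []                      # [dish, customer count] pairs

    def draw(self, base_draw):
        n = sum(c for _, c in self.tables)    # customers seated so far
        k = len(self.tables)                  # occupied tables
        if n == 0 or random.random() < (k * self.a + self.b) / (n + self.b):
            dish = base_draw()                # open a new table, labelled from the base
            self.tables.append([dish, 1])
            return dish
        r = random.uniform(0, n - k * self.a) # join a table w.p. proportional to (count - a)
        for table in self.tables:
            r -= table[1] - self.a
            if r <= 0:
                table[1] += 1
                return table[0]
        self.tables[-1][1] += 1               # numerical safety fallback
        return self.tables[-1][0]

def draw_tree(model):                         # a draw from P2
    r = model.draw_rule_type()                # "mono", "swap" or "emit"
    if r == "emit":
        return ("emit", model.draw_word_pair())   # from E ~ DP(b_E, P0)
    rest = model.T_mono if r == "mono" else model.T_swap
    return rest.draw(lambda: draw_child_pair(model, r))

def draw_child_pair(model, r):                # a draw from P1: two recursive P2 draws
    return (r, draw_tree(model), draw_tree(model))
```

As in the text, the recursion terminates whenever an emission is drawn or an occupied table is joined, in which case a previously generated complete subtree is reused.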
In the standard CRF analogy, this leads to another customer entering the restaurant one step up in the hierarchy, and this process can be chained many times. In our case, however, every new table leads to new customers reentering the original restaurant – these correspond to the left and right child trees of a monotone or swap rule. The recursion terminates when a table is shared, or a new table is labelled with a emit rule. 3.1 Inference The probability of a tree (i.e., a draw from P2) under the model is P2(t) = P(r)P2(t|r) (1) where r is the rule type, one of mono, swap or emit. The distribution over types, P(r), is defined as P(r) = nT,− r + bT 1 3 nT,−+ bT where nT,−are the counts over rules of types.4 The second component in (1), P2(t|r), is defined separately for each rule type. For r = mono or r = swap rules, it is defined as P2(t|r) = n− t,r −K− t,rar n− r + br + K− r ar + br n− r + br P1(t1, t2|r) , (2) where n− t,r is the count for tree t in the other training sentences, K− t,r is the table count for t and n− r 4The conditioning on event and table counts, n−, K−is omitted for clarity. and K− r are the total count of trees and tables, respectively. Finally, the probability for r = emit is given by P2(t|r = emit) = n− t,E + bEP0(e, f) n− r + br , where t = X →e/f. To complete the derivation we still need to define P1, which is formulated as P1(t1, t2) = P2(t1)P2(t2|t1) , where the conditioning of the second recursive call to P2 reflects that the counts n−and K−may be affected by the first draw from P2. Although these two draws are assumed iid in the prior, after marginalising out T they are no longer independent. For this reason, evaluating P2(t) is computationally expensive, requiring tracking of repeated substructures in descendent sub-trees of t, which may affect other descendants. This results in an asymptotic complexity exponential in the number of nodes in the tree. For this reason we consider trees annotated with binary values denoting their table assignment, namely whether they share a table or are seated alone. Given this, the calculation is greatly simplified, and has linear complexity.5 We construct an approximating ITG following the technique used for sampling trees from monolingual tree-substitution grammars (Cohn et al., 2010). To do so we encode the first term from (2) separately from the second term (corresponding to draws from P1). Summing together these two alternate paths – i.e., during inside inference – we recover P2 as shown in (2). The full grammar transform for inside inference is shown in Table 1. The sampling algorithm closely follows the process for sampling derivations from Bayesian PCFGs (Johnson et al., 2007b). For each sentencepair, we first decrement the counts associated with its current tree, and then sample a new derivation. This involves first constructing the inside lattice using the productions in Table 1, and then performing a top-down sampling pass. After sampling each derivation from the approximating grammar, we then convert this into its corresponding ITG tree, which we then score with the full model and accept or reject the sample using the 5To support this computation, we track explicit table assignments for every training tree and their component subtrees. We also sample trees labelled with seating indicator variables. 
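Treating the counts as fixed quantities, the predictive probabilities above reduce to simple ratios; a minimal sketch of Eq. (2) and the emission case, with the count statistics passed in explicitly (variable names are illustrative, not from the paper):

```python
def p_rule_type(n_type, b_T, r):
    """P(r): smoothed relative frequency over the three rule types."""
    total = sum(n_type.values())
    return (n_type[r] + b_T / 3.0) / (total + b_T)

def p_tree_given_type(n_t, K_t, n_r, K_r, a_r, b_r, p1_children):
    """Eq. (2): Pitman-Yor predictive probability of tree t for r in {mono, swap}.

    n_t, K_t: customer and table counts for this tree; n_r, K_r: totals for the
    rule type; p1_children: P1(t1, t2 | r), the base probability of the children.
    """
    cache = (n_t - K_t * a_r) / (n_r + b_r)
    base = (K_r * a_r + b_r) / (n_r + b_r) * p1_children
    return cache + base

def p_emit(n_t_E, n_E, b_E, p0_ef):
    """Emission case: Dirichlet-process predictive probability of the pair e/f."""
    return (n_t_E + b_E * p0_ef) / (n_E + b_E)
```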
783 Type X →M P(r = mono) X →S P(r = swap) X →E P(r = emit) Base M →[XX] K− MaM+bM n− M+bM S →⟨XX⟩ K− S aS+bS n− S +bS Count For every tree, t, of type r = mono, with nt,M > 0: M →sig(t) n− t,M−K− t,Mar n− M+bM sig(t) →yield(t) 1 For every tree, t, of type r = swap, with nt,S > 0: S →sig(t) n− t,S−K− t,SaS n− S +bS sig(t) →yield(t) 1 Emit For every word pair, e/f in sentence pair, where one of e, f can be ⊥: E →e/f P2(t) Table 1: Grammar transformation rules for MAP inside inference. The function sig(t) returns a unique identifier for the complete tree t, and the function yield(t) returns the pair of terminal strings from the yield of t. Metropolis-Hastings algorithm.6 Accepted samples then replace the old tree (otherwise the old tree is retained) and the model counts are incremented. This process is then repeated for each sentence pair in the corpus in a random order. 4 Experiments Datasets We train our model across three language pairs: Urdu→English (UR-EN), Farsi→English (FA-EN), and Arabic→English (AR-EN). The corpora statistics of these translation tasks are summarised in Table 2. The UR-EN corpus comes from NIST 2009 translation evaluation.7 The AR-EN training data consists of the eTIRR corpus (LDC2004E72), the Arabic news corpus (LDC2004T17), the Ummah corpus (LDC2004T18), and the sentences with confidence c > 0.995 in the ISI automatically extracted web parallel corpus (LDC2006T02). For FA-EN, we use TEP8 Tehran English-Persian Parallel corpus (Pilevar and Faili, 2011), which consists of conversational/informal text extracted 6The full model differs from the approximating grammar in that it accounts for inter-dependencies between subtrees by recursively tracking the changes in the customer and table counts while scoring the tree. Around 98% of samples were accepted in our experiments. 7http://www.itl.nist.gov/iad/mig/tests/mt/2009 8http://ece.ut.ac.ir/NLP/resources.htm source target sentences UR-EN 745K 575K 148K FA-EN 4.7M 4.4M 498K AR-EN 1.94M 2.08M 113K Table 2: Corpora statistics showing numbers of parallel sentences and source and target words for the training sets. from 1600 movie subtitles. We tokenized this corpus, removed noisy single-word sentences, randomly selected the development and test sets, and used the rest of the corpus as the training set. We discard sentences with length above 30 from the datasets for all experiments.9 Sampler configuration Samplers are initialised with trees created from GIZA++ alignments constructed using a SCFG factorisation method (Blunsom et al., 2009a). This algorithm represents the translation of a sentence as a large SCFG rule, which it then factorises into lower rank SCFG rules, a process akin to rule binarisation commonly used in SCFG decoding. Rules that cannot be reduced to a rank-2 SCFG are simplified by dropping alignment edges until they can be factorised, the net result being an ITG derivation largely respecting the alignments.10 The blocked sampler was run 1000 iterations for UR-EN, 100 iterations for FA-EN and AREN. After each full sampling iteration, we resample all the hyper-parameters using slice-sampling, with the following priors: a ∼ Beta(1, 1), b ∼Gamma(10, 0.1). Figure 1 shows the posterior probability improves with each full sampling iterations. The alignment probability was set to η = 0.99. The sampling was repeated for 5 independent runs, and we present results where we combine the outputs of these runs. 
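The blocked sampling procedure described above can be summarised in a short sketch; the helpers (chart construction, top-down sampling, count updates, hyper-parameter resampling) are passed in as callables and stand for the steps named in the text rather than an actual implementation:

```python
# One sweep of the blocked sampler with a Metropolis-Hastings correction.
import math
import random

def blocked_sweep(corpus, model, build_inside_chart, sample_top_down, to_itg_tree):
    random.shuffle(corpus)                      # visit sentence pairs in random order
    for sent in corpus:
        model.decrement(sent.tree)              # remove the old tree's counts
        chart = build_inside_chart(sent, model) # inside pass over the Table-1 grammar
        proposal, log_q_new = sample_top_down(chart)
        new_tree = to_itg_tree(proposal)
        # accept/reject against the full model, which tracks repeated substructures
        log_ratio = (model.log_p(new_tree) - log_q_new
                     - model.log_p(sent.tree) + sent.log_q)
        if math.log(random.random()) < min(0.0, log_ratio):
            sent.tree, sent.log_q = new_tree, log_q_new
        model.increment(sent.tree)              # add back the accepted (or kept) tree
    model.resample_hyperparameters()            # slice sampling: a ~ Beta(1,1), b ~ Gamma(10, 0.1)
```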
Combining the outputs of independent runs in this way is a form of Monte Carlo integration which allows us to represent the uncertainty in the posterior, while also representing multiple modes, if present. The time complexity of our inference algorithm is O(n6), which can be prohibitive for large scale machine translation tasks. We reduce the complexity by constraining the inside inference to consider only derivations which are compatible with high confidence alignments from GIZA++.11 Figure 2 shows the sampling time with respect to the average sentence length, showing that our alignment-constrained sampling algorithm is much faster than unconstrained inference, with an empirical complexity of roughly n4. However, the time complexity is still high, so we set the maximum sentence length to 30 to keep our experiments practicable. Presumably other means of inference may be more efficient, such as Gibbs sampling (Levenberg et al., 2012) or auxiliary variable sampling (Blunsom and Cohn, 2010); we leave these extensions to future work.

9 Hence the BLEU scores we get for the baselines may appear lower than those reported in the literature.
10 Using the factorised alignments directly in a translation system resulted in a slight loss in BLEU versus using the unfactorised alignments. Our baseline system uses the latter.

Figure 1: Training progress on the UR-EN corpus, showing the posterior probability improving with each full sampling iteration. Different colours denote independent sampling runs.

Figure 2: The runtime cost of bottom-up inside inference and top-down sampling as a function of sentence length (UR-EN), with time shown on a logarithmic scale. Full ITG inference is shown with red circles, and restricted inference using the intersection constraints with blue triangles. The average time complexity for the latter is roughly O(l4), as plotted in green (t = 2 × 10−7 l4).

Baselines. Following (Levenberg et al., 2012; Neubig et al., 2011), we evaluate our model by using its output word alignments to construct a phrase table. As a baseline, we train a phrase-based model using the moses toolkit12 based on the word alignments obtained using GIZA++ in both directions and symmetrized using the grow-diag-final-and heuristic13 (Koehn et al., 2003). This alignment is used as input to the rule factorisation algorithm, producing the ITG trees with which we initialise our sampler. To put our results in the context of previous work, we also compare against pialign (Neubig et al., 2011), an ITG algorithm using a Pitman-Yor process prior, as described in Section 2.14 In the end-to-end MT pipeline we use a standard set of features: relative-frequency and lexical translation model probabilities in both directions; distance-based distortion model; language model and word count. We set the distortion limit to 6 and max-phrase-length to 7 in all experiments.
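For reference, the standard consistency criterion used when building a phrase table from word alignments can be sketched as follows; this is a simplified version of generic phrase extraction that ignores unaligned boundary words, not the toolkit's exact implementation:

```python
def extract_phrase_pairs(src, tgt, alignment, max_len=7):
    """Enumerate phrase pairs consistent with a word alignment.

    alignment: set of (i, j) links between source position i and target position j.
    A source span [i1, i2] and target span [j1, j2] form a consistent pair if the
    box contains at least one link and no link crosses its boundary.
    """
    pairs = []
    for i1 in range(len(src)):
        for i2 in range(i1, min(i1 + max_len, len(src))):
            linked_j = [j for (i, j) in alignment if i1 <= i <= i2]
            if not linked_j:
                continue
            j1, j2 = min(linked_j), max(linked_j)
            if j2 - j1 + 1 > max_len:
                continue
            # reject if a target word inside the span aligns outside the source span
            if any(j1 <= j <= j2 and not (i1 <= i <= i2) for (i, j) in alignment):
                continue
            pairs.append((" ".join(src[i1:i2 + 1]), " ".join(tgt[j1:j2 + 1])))
    return pairs
```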
We train 3-gram language models using modified Kneser-Ney smoothing. For AR-EN experiments the language model is trained on English data as (Blunsom et al., 2009a), and for FA-EN and UREN the English data are the target sides of the bilingual training data. We use minimum error rate training (Och, 2003) with nbest list size 100 to optimize the feature weights for maximum development BLEU. 11These are taken from the final model 4 word alignments, using the intersection of the source-target and target-source models. These alignments are very high precision (but have low recall), and therefore are unlikely to harm the model. 12http://www.statmt.org/moses 13We use the default parameter settings in both moses and GIZA++. 14http://www.phontron.com/pialign 785 Baselines This paper GIZA++ pialign individual combination UR-EN 16.95 15.65 16.68 ± .12 16.97 FA-EN 20.69 21.41 21.36 ± .17 21.50 AR-EN MT03 44.05 43.30 44.8 ± .28 45.10 MT04 38.15 37.78 38.4 ± .08 38.4 MT05 42.81 42.18 43.13 ± .23 43.45 MT08 32.43 33.00 32.7 ± .15 32.80 Table 3: The BLEU scores for the translation tasks of three language pairs. The individual column show the average and 95% confidence intervals for 5 independent runs, whereas the combination column show the results for combining the phrase tables of all these runs. The baselines are GIZA++ alignments and those generated by the pialign (Neubig et al., 2011) bold: the best result. 1 2 5 10 20 50 100 1e−05 1e−03 1e−01 rule frequency fraction of grammar monotone swap emit Figure 3: Fraction of rules with a given frequency, using a single sample grammar (UR-EN). 4.1 Results Table 3 shows the BLEU scores for the three translation tasks UR/AR/FA→EN based on our method against the baselines. For our models, we report the average BLEU score of the 5 independent runs as well as that of the aggregate phrase table generated by these 5 independent runs. There are a number of interesting observations in Table 3. Firstly, combining the phrase tables from independent runs results in increased BLEU scores, possibly due to the representation of uncertainty in the outputs, and the representation of different modes captured by the individual models. We believe this type of Monte Carlo model averaging should be considered in general when sampling techniques are employed for grammatical inference, e.g. in parsing and translation. Secondly, our approach consistently improves over the Giza++ baseline often by a large margin, whereas pialign underperforms the GIZA++ baseline in many cases. Thirdly, our model consistently outperforms pialign (except in AR-EN MT08 which is very close). This highlights the modeling and inference differences between our method and the pialign. 5 Analysis In this section, we present some insights about the learned grammar and the model hyper-parameters. Firstly, we start by presenting various statistics about different learned grammars. Figure 3 shows the fraction of rules with a given frequency for each of the three rule types. The three types of rule exhibit differing amounts of high versus low frequency rules, and all roughly follow power laws. As expected, there is a higher tendency to reuse high-frequency emissions (or single-word translation) compared to other rule types, which are the basic building blocks to compose larger rules (or phrases). Table 4 lists the high frequency monotone and swap rules in the learned grammar. We observe the high frequency swap rules capture reordering in verb clusters, preposition-noun inversions and adjective-noun reordering. 
Similar patterns are seen in the monotone rules, along with some common canned phrases. Note that “in Iraq” appears twice, once as an inversion in UR-EN and another time in monotone order for AR-EN. Secondly, we analyse the values learned for the model hyper-parameters; Figure 4(a) shows the posterior distribution over the hyper-parameter values. There is very little spread in the inferred values, suggesting the sampling chains may have converged. Furthermore, there is a large difference between the learned hyper-parameters for the monotone rules versus the swap rules. For the Pitman-Yor Process prior, the values of the hyper-parameters affect the rate at which the number of types grows compared to the number of tokens. Specifically, as the discount a or the concentration b parameters increase, we expect a relative increase in the number of types. If the numbers of observed monotone and swap rules were equal, then there would be a higher chance of reusing the monotone rules. However, the numbers of observed monotone and swap rules are not equal, as plotted in Figure 4(b).

Table 5: Good phrase pairs in the top-100 high frequency phrase pairs specific to the phrase tables coming from our method vs. that of pialign for FA-EN and AR-EN translation tasks.
Similar results were observed for the other language pairs (figures omitted for space reasons). Thirdly, we performed a manual evaluation of the quality of the phrase-pairs learned exclusively by our method vs. pialign. For each method, we considered the top-100 high frequency phrase-pairs which are specific to that method. Then we asked a bilingual human expert to identify the reasonably good phrase-pairs among these top-100 phrase-pairs. The results are summarized in Table 5, and show that we learn roughly twice as many reasonably good phrase-pairs for AR-EN and FA-EN compared to pialign.

Conclusions

We have presented a novel method for learning a phrase-based model of translation directly from parallel data which we have framed as learning an inverse transduction grammar (ITG) using a recursive Bayesian prior. This has led to a model which learns translations of entire sentences, while also learning their decomposition into smaller units (phrase-pairs) recursively, terminating at word translations. We have presented a Metropolis-Hastings sampling algorithm for blocked inference in our non-parametric ITG. Our experiments on Urdu-English, Arabic-English, and Farsi-English translation tasks all demonstrate improvements over competitive baseline systems.

Acknowledgements

The first author was supported by the EPSRC (grant EP/I034750/1) and an Erasmus-Mundus scholarship funding a research visit to Melbourne. The second author was supported by an early career research award from Monash University.

Figure 4: (a) Posterior over the hyper-parameters, aM, aS, bM, bS, bE, bT, measured for UR-EN using samples 400–500 for 3 independent sampling chains, and the intersection constraints. (b) Posterior over the number of monotone and swap rules in the resultant grammars. The distribution for emission rules was also peaked, at about 147k rules.

Table 4: Top 5 monotone and swap productions and their counts.
Rules with mostly punctuation or encoding 1:many or many:1 alignments were omitted. 788 References Phil Blunsom and Trevor Cohn. 2010. Inducing synchronous grammars with slice sampling. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 238–241, Los Angeles, California, June. Association for Computational Linguistics. Phil Blunsom, Trevor Cohn, Chris Dyer, and Miles Osborne. 2009a. A Gibbs sampler for phrasal synchronous grammar induction. In ACL2009, Singapore, August. Phil Blunsom, Trevor Cohn, and Miles Osborne. 2009b. Bayesian synchronous grammar induction. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 161–168. MIT Press. P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Colin Cherry and Dekang Lin. 2006. Soft syntactic constraints for word alignment through discriminative training. In Proceedings of COLING/ACL. Association for Computational Linguistics. Colin Cherry and Dekany Lin. 2007. Inversion transduction grammar for joint phrasal translation modeling. In Proc. of the HLT-NAACL Workshop on Syntax and Structure in Statistical Translation (SSST 2007), Rochester, USA. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Trevor Cohn, Phil Blunsom, and Sharon Goldwater. 2010. Inducing tree-substitution grammars. Journal of Machine Learning Research, pages 3053–3096. John DeNero and Dan Klein. 2010. Discriminative modeling of extraction sets for machine translation. In The 48th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL). Aria Haghighi, John Blitzer, and Dan Klein. 2009. Better word alignments with supervised itg models. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, Suntec, Singapore. Association for Computational Linguistics. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007a. Adaptor grammars: A framework for specifying compositional nonparametric bayesian models. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 641–648. MIT Press, Cambridge, MA. Mark Johnson, Thomas L Griffiths, and Sharon Goldwater. 2007b. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Proc. of the 7th International Conference on Human Language Technology Research and 8th Annual Meeting of the NAACL (HLT-NAACL 2007), pages 139–146. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of the 3rd International Conference on Human Language Technology Research and 4th Annual Meeting of the NAACL (HLT-NAACL 2003), pages 81–88, Edmonton, Canada, May. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of the 45th Annual Meeting of the ACL (ACL2007), Prague. Abby Levenberg, Chris Dyer, and Phil Blunsom. 2012. A Bayesian model for learning SCFGs with discontiguous rules. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 223–232, Jeju Island, Korea, July. Association for Computational Linguistics. Daniel Marcu and William Wong. 2002. A phrasebased, joint probability model for statistical machine translation. In Proc. of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP-2002), pages 133–139, Philadelphia, July. Association for Computational Linguistics. Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. SPMT: Statistical machine translation with syntactified target language phrases. In Proc. of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP2006), pages 44–52, Sydney, Australia, July. Graham Neubig, Taro Watanabe, Eiichiro Sumita, Shinsuke Mori, and Tatsuya Kawahara. 2011. An unsupervised model for joint phrase alignment and extraction. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT), pages 632–641, Portland, Oregon, USA, 6. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of the 41st Annual Meeting of the ACL (ACL-2003), pages 160– 167, Sapporo, Japan. Adam Pauls, Dan Klein, David Chiang, and Kevin Knight. 2010. Unsupervised syntactic alignment with inversion transduction grammars. In Proceedings of the North American Conference of the Association for Computational Linguistics (NAACL). Association for Computational Linguistics. 789 Slav Petrov, Aria Haghighi, and Dan Klein. 2008. Coarse-to-fine syntactic machine translation using language projections. In Proceedings of EMNLP. Association for Computational Linguistics. M. T. Pilevar and H. Faili. 2011. Tep: Tehran englishpersian parallel corpus. In Proc. International Conference on Intelligent Text Processing and Computational Linguistics (CICLing). Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. Y. W. Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 985–992. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Hao Zhang and Daniel Gildea. 2005. Stochastic lexicalized inversion transduction grammar for alignment. In Proceedings of the 43rd Annual Conference of the Association for Computational Linguistics (ACL). Association for Computational Linguistics. Hao Zhang, Chris Quirk, Robert C. Moore, and Daniel Gildea. 2008. Bayesian learning of noncompositional phrases with synchronous parsing. In Proc. of the 46th Annual Conference of the Association for Computational Linguistics: Human Language Technologies (ACL-08:HLT), pages 97–105, Columbus, Ohio, June. 790
2013
77
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 791–801, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Additive Neural Networks for Statistical Machine Translation Lemao Liu1, Taro Watanabe2, Eiichiro Sumita2, Tiejun Zhao1 1School of Computer Science and Technology Harbin Institute of Technology (HIT), Harbin, China 2National Institute of Information and Communication Technology (NICT) 3-5 Hikari-dai, Seika-cho, Soraku-gun, Kyoto, Japan {lmliu | tjzhao}@mtlab.hit.edu.cn {taro.watanabe | eiichiro.sumita}@nict.go.jp Abstract Most statistical machine translation (SMT) systems are modeled using a loglinear framework. Although the log-linear model achieves success in SMT, it still suffers from some limitations: (1) the features are required to be linear with respect to the model itself; (2) features cannot be further interpreted to reach their potential. A neural network is a reasonable method to address these pitfalls. However, modeling SMT with a neural network is not trivial, especially when taking the decoding efficiency into consideration. In this paper, we propose a variant of a neural network, i.e. additive neural networks, for SMT to go beyond the log-linear translation model. In addition, word embedding is employed as the input to the neural network, which encodes each word as a feature vector. Our model outperforms the log-linear translation models with/without embedding features on Chinese-to-English and Japanese-to-English translation tasks. 1 Introduction Recently, great progress has been achieved in SMT, especially since Och and Ney (2002) proposed the log-linear model: almost all the stateof-the-art SMT systems are based on the log-linear model. Its most important advantage is that arbitrary features can be added to the model. Thus, it casts complex translation between a pair of languages as feature engineering, which facilitates research and development for SMT. Regardless of how successful the log-linear model is in SMT, it still has some shortcomings. This joint work was done while the first author visited NICT. On the one hand, features are required to be linear with respect to the objective of the translation model (Nguyen et al., 2007), but it is not guaranteed that the potential features be linear with the model. This induces modeling inadequacy (Duh and Kirchhoff, 2008), in which the translation performance may not improve, or may even decrease, after one integrates additional features into the model. On the other hand, it cannot deeply interpret its surface features, and thus can not efficiently develop the potential of these features. What may happen is that a feature p does initially not improve the translation performance, but after a nonlinear operation, e.g. log(p), it does. The reason is not because this feature is useless but the model does not efficiently interpret and represent it. Situations such as this confuse explanations for feature designing, since it is unclear whether such a feature contributes to a translation or not. A neural network (Bishop, 1995) is a reasonable method to overcome the above shortcomings. However, it should take constraints, e.g. the decoding efficiency, into account in SMT. Decoding in SMT is considered as the expansion of translation states and it is handled by a heuristic search (Koehn, 2004a). 
In the search procedure, frequent computation of the model score is needed for the search heuristic function, which will be challenged by the decoding efficiency for the neural network based translation model. Further, decoding with non-local (or state-dependent) features, such as a language model, is also a problem. Actually, even for the (log-) linear model, efficient decoding with the language model is not trivial (Chiang, 2007). In this paper, we propose a variant of neural networks, i.e. additive neural networks (see Section 3 for details), for SMT. It consists of two components: a linear component which captures nonlocal (or state dependent) features and a non-linear component (i.e., neural nework) which encodes lo791 X te „ X Ë} \ X X friendly cooperation over the last years Figure 1: A bilingual tree with two synchronous rules, r1 : X →⟨Ë} \; friendly cooperation⟩ and r2 : X →⟨te „ X; X over the last years⟩. The inside rectangle denotes the partial derivation d1 = {r1} with the partial translation e1 =“friendly cooperation”, and the outside rectangle denotes the derivation d2 = {r1, r2} with the translation e2=“friendly cooperation over the last years”. cal (or state independent) features. Compared with the log-linear model, it has more powerful expressive abilities and can deeply interpret and represent features with hidden units in neural networks. Moreover, our method is simple to implement and its decoding efficiency is comparable to that of the log-linear model. We also integrate word embedding into the model by representing each word as a feature vector (Collobert and Weston, 2008). Because of the thousands of parameters and the non-convex objective in our model, efficient training is not simple. We propose an efficient training methodology: we apply the mini-batch conjugate sub-gradient algorithm (Le et al., 2011) to accelerate the training; we also propose pre-training and post-training methods to avoid poor local minima. The biggest contribution of this paper is that it goes beyond the log-linear model and proposes a non-linear translation model instead of re-ranking model (Duh and Kirchhoff, 2008; Sokolov et al., 2012). On both Chinese-to-English and Japanese-toEnglish translation tasks, experiment results show that our model can leverage the shortcomings suffered by the log-linear model, and thus achieves significant improvements over the log-linear based translation. 2 Log-linear Model, Revisited 2.1 Log-linear Translation Model Och and Ney (2002) proposed the log-linear translation model, which can be formalized as follows: P(e, d|f; W) = exp  W ⊤· h(f, e, d) P e′,d′ exp  W ⊤· h(f, e′, d′) , (1) where f denotes the source sentence, and e(e′) denotes its translation candidate; d(d′) is a derivation over the pair ⟨f, e⟩, i.e., a collection of synchronous rules for Hiero grammar (Chiang, 2005), or phrase pairs in Moses (Koehn et al., 2007); h(f, e, d) = (h1(f, e, d), h2(f, e, d), · · · , hK(f, e, d))⊤ is a K-dimensional feature vector defined on the tuple ⟨f, e, d⟩; W = (w1, w2, · · · , wK)⊤is a Kdimensional weight vector of h, i.e., the parameters of the model, and it can be tuned by the toolkit MERT (Och, 2003). Different from Brown’s generative model (Brown et al., 1993), the loglinear model does not assume strong independency holds, and allows arbitrary features to be integrated into the model easily. 
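As a concrete illustration (not the authors' code), the model score and Eq. (1) over an enumerable candidate list amount to a dot product followed by a softmax:

```python
import math

def linear_score(w, h):
    """W^T . h(f, e, d): the unnormalised log-linear model score."""
    return sum(w_k * h_k for w_k, h_k in zip(w, h))

def conditional_prob(w, candidates):
    """Eq. (1) restricted to an enumerable candidate list.

    candidates: list of (e, d, h) tuples, where h is the feature vector of the
    derivation; returns P(e, d | f; W) for each candidate in the list.
    """
    scores = [linear_score(w, h) for (_, _, h) in candidates]
    m = max(scores)                                 # log-sum-exp for numerical stability
    z = sum(math.exp(s - m) for s in scores)
    return [math.exp(s - m) / z for s in scores]
```

Adding a new feature simply appends another component to h together with its own weight.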
In other words, it can transform complex language translation into feature engineering: it can achieve high translation performance if reasonable features are chosen and appropriate parameters are assigned for the weight vector. 2.2 Decoding By Search Given a source sentence f and a weight W, decoding finds the best translation candidate ˆe via the programming problem: ⟨ˆe, ˆd⟩= arg max e,d P(e, d|f; W) = arg max e,d  W ⊤· h(f, e, d) . (2) Since the range of ⟨e, d⟩is exponential with respect to the size of f, the exact decoding is intractable and an inexact strategy such as beam search is used instead in practice. The idea of search for decoding can be shown in Figure 1: it encodes each search state as a partial translation together with its derivation, e.g. ⟨e1, d1⟩; it consequently expands the states from the initial (empty) state to the end state ⟨e2, d2⟩ according to the translation rules r1 and r2. During the state expansion process, the score wi · 792 hi(f, e, d) for a partial translation is calculated repeatedly. In the log-linear model, if hi(f, e, d) is a local feature, the calculation of its score wi · hi(f, e, d) has a substructure, and thus it can be calculated with dynamic programming which accelerates its decoding. For the non-local features such as the language model, Chiang (2007) proposed a cube-pruning method for efficient decoding. The main reason why cube-pruning works is that the translation model is linear and the model score for the language model is approximately monotonic (Chiang, 2007). 3 Additive Neural Networks 3.1 Motivation Although the log-linear model has achieved great progress for SMT, it still suffers from some pitfalls: it requires features be linear with the model and it can not interpret and represent features deeply. The neural network model is a reasonable method to overcome these pitfalls. However, the neural network based machine translation is far from easy. As mentioned in Section 2, the decoding procedure performs an expansion of translation states. Firstly, let us consider a simple case in neural network based translation where all the features in the translation model are independent of the translation state, i.e. all the components of the vector h(f, e, d) are local features. In this way, we can easily define the following translation model with a single-layer neural network: S(f, e, d; W, M, B) = W ⊤· σ(M · h(f, e, d) + B), (3) where M ∈Ru×K is a matrix, and B ∈Ru is a vector, i.e. bias; σ is a single-layer neural network with u hidden units, i.e. an element wise sigmoid function sigmoid(x) = 1/ 1 + exp(−x)  . For consistent description in the rest, we also represent Eq. (3) as a function of a feature vector h, i.e. S(h; W, M, B) = W ⊤· σ(M · h + B). Now let us consider the search procedure with the model in Eq. (3) using Figure 1 as our example. Suppose the current translation state is encoded as ⟨e1, d1⟩, which is expanded into ⟨e2, d2⟩ using the rule r2 (d2 = d1 ∪{r2}). Since h is state-independent, h(f, e2, d2) = h(f, e1, d1) + h(r2). However, since S(f, e, d; W, M, B) is nondecomposable as a linear model, there is no substructure for calculating S(f, e2, d2; W, M, B), and one has to re-calculate it via Eq. (3) even if the score of S(f, e1, d1; M, B) for its previous state ⟨e1, d1⟩is available. When the size of the parameter (W, M, B) is relatively large, it will be a challenge for the decoding efficiency. 
In order to keep the substructure property, S(f, e2, d2; W, M, B) should be represented as F S(f, e1, d1; W, M, B); S(h(r2); M, B)  by a function F. For simplicity, we suppose that the additive property holds in F, and then we can obtain a new translation model via the following recursive equation: S(f, e2, d2; W, M, B) = S(f, e1, d1; W, M, B) + S h(r2); W, M, B  . (4) Since the above model is defined only on local features, it ignores the contributions from nonlocal features. Actually, existing works empirically show that some non-local features, especially language model, contribute greatly to machine translation. Scoring for non-local features such as a ngram language model is not easily done. In loglinear translation model, Chiang (2007) proposed a cube-pruning method for scoring the language model. The premise of cube-pruning is that the language model score is approximately monotonic (Chiang, 2007). However, if scoring the language model with a neural network, this premise is difficult to hold. Therefore, one of the solutions is to preserve a linear model for scoring the language model directly. 3.2 Definition According to the above analysis, we propose a variant of a neural network model for machine translation, and we call it Additive Neural Networks or AdNN for short. The AdNN model is a combination of a linear model and a neural network: non-local features, e.g. LM, are linearly modeled for the cubepruning strategy, and local features are modeled by the neural network for deep interpretation and representation. Formally, the AdNN based translation model is discriminative but non-probabilistic, and it can be defined as follows: S(f, e, d; θ) = W ⊤· h(f, e, d)+ X r∈d W ′⊤· σ M · h′(r) + B  , (5) 793 where h and h′ are feature vectors with dimension K and K′ respectively, and each component of h′ is a local feature which can be defined on a rule r : X →⟨α, γ⟩; θ = (W, W ′, M, B) is the model parameters with M ∈Ru×K′. In this paper, we focus on a single-layer neural network for its simplicity, and one can similarly define σ as a multilayer neural network. Again for the example shown in Figure 1, the model score defined in Eq. (5) for the pair ⟨e2, d2⟩ can be represented as follows: S(f, e2, d2; θ) = W ⊤· h(f, e2, d2)+ W ′⊤·σ M·h′(r1)+B  +W ′⊤·σ M·h′(r2)+B  . Eq. (5) is similar to both additive models (Buja et al., 1989) and generalized additive neural networks (Potts, 1999): it consists of many additive terms, and each term is either a linear or a nonlinear (a neural network) model. That is the reason why our model is called “additive neural networks”. Of course, our model still has some differences from both of them. Firstly, our model is decomposable with respect to rules instead of the component variables. Secondly, some of its additive terms share the same parameters (M, B). There are also strong relationships between AdNN and the log-linear model. If we consider the parameters (M, B) as constant and σ M · h′(r) + B  as a new feature vector, then AdNN is reduced to a log-linear model. Since both (M, B) and (W, W ′) are parameters in AdNN, our model can jointly learn the feature σ M · h′(r) + B  and tune the weight (W, W ′) of the log-linear model together. That is different from most works under the log-linear translation framework, which firstly learn features or sub-models and then tune the log-linear model including the learned features in two separate steps. 
By joint training, AdNN can learn the features towards the translation evaluation metric, which is the main advantage of our model over the log-linear model.

In this paper, we apply our AdNN model to hierarchical phrase-based translation; it can be similarly applied to phrase-based or syntax-based translation. Similar to Hiero (Chiang, 2005), the feature vector h in Eq. (5) includes 8 default features, which consist of translation probabilities, lexical translation probabilities, word penalty, glue rule penalty, synchronous rule penalty and the language model. These default features are included because they empirically perform well in the log-linear model. For the local feature vector h′ in Eq. (5), we employ word embedding features as described in the following subsection.

3.3 Word Embedding Features for AdNN

Word embeddings can relax the sparsity introduced by lexicalization in NLP, and they improve systems for many tasks such as language modeling, named entity recognition, and parsing (Collobert and Weston, 2008; Turian et al., 2010; Collobert, 2011). Here, we propose embedding features for rules in SMT by combining word embeddings. Firstly, we define the embedding for the source side α of a rule r : X → ⟨α, γ⟩. Let VS be the source-language vocabulary with size |VS|; let R^{n×|VS|} be the word embedding matrix, each column of which is the word embedding (an n-dimensional vector) for the corresponding word in VS; and let maxSource be the maximal length of α over all rules. We further assume that α has length maxSource for all rules; otherwise, we append maxSource − |α| "NULL" words to the end of α to obtain a new α. We define the embedding of α as the concatenation of the word embeddings of each word in α. In particular, for a non-terminal in α, we define its word embedding as the vector whose components are all 0.1; and we define the word embedding of "NULL" as 0. Then, we similarly define the embedding for the target side of a rule, given an embedding matrix for the target vocabulary. Finally, we define the embedding of a rule as the concatenation of the embeddings of its source and target sides.

In this paper, we apply the word embedding matrices from the RNNLM toolkit (Mikolov et al., 2010) with the default settings: we train two RNN language models on the source and target sides of the training corpus, respectively, and then obtain the two matrices as their by-products.1 It would potentially be better to train the word embedding matrices on a much larger corpus, as in (Collobert and Weston, 2008); we leave this as future work.

3.4 Decoding

Substituting P(e, d|f; W) in Eq. (2) with S(f, e, d; θ) from Eq. (5), we obtain the corresponding decoding formula:

$$\langle \hat{e}, \hat{d} \rangle = \arg\max_{e,d} S(f, e, d; \theta).$$

Given the model parameters θ = (W, W′, M, B), if we consider (M, B) as constant and σ(M · h′(r) + B) as an additional feature vector besides h, then Eq. (5) goes back to being a log-linear model with parameters (W, W′). In this way, decoding for AdNN can share the same search strategy and cube-pruning method as the log-linear model.

1 In the RNNLM toolkit, the default dimension for word embeddings is n = 30. In our experiments, the maximal lengths of α and γ are 5 and 12, respectively. Thus the dimension of h′ is K′ = 30 × (5 + 12) = 510.
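Before turning to training, the rule-embedding construction of Section 3.3 can be sketched as follows; the vocabularies, non-terminal marker, and random matrices below are invented stand-ins (the actual matrices come from the RNNLM by-products), and only the padding, non-terminal, and NULL conventions follow the description above.

```python
import numpy as np

N_DIM = 30        # embedding dimension n (the RNNLM default)
MAX_SOURCE = 5    # maximal source-side length over all rules
MAX_TARGET = 12   # maximal target-side length over all rules

def side_embedding(tokens, vocab, emb_matrix, max_len):
    """Concatenate per-word embeddings of one side of a rule: non-terminals map to
    an all-0.1 vector, and padding 'NULL' positions map to the zero vector."""
    vecs = []
    for tok in tokens:
        if tok.startswith("[X"):               # hypothetical non-terminal marker
            vecs.append(np.full(N_DIM, 0.1))
        else:
            vecs.append(emb_matrix[:, vocab[tok]])
    vecs += [np.zeros(N_DIM)] * (max_len - len(tokens))  # pad with NULL embeddings
    return np.concatenate(vecs)

def rule_embedding(src_tokens, tgt_tokens, src_vocab, src_emb, tgt_vocab, tgt_emb):
    """h'(r): concatenation of the source-side and target-side embeddings of rule r."""
    return np.concatenate([
        side_embedding(src_tokens, src_vocab, src_emb, MAX_SOURCE),
        side_embedding(tgt_tokens, tgt_vocab, tgt_emb, MAX_TARGET),
    ])

# Toy vocabularies and random matrices stand in for the RNNLM embedding matrices.
rng = np.random.default_rng(2)
src_vocab = {"wo": 0, "ai": 1, "ni": 2}
tgt_vocab = {"i": 0, "love": 1, "you": 2}
src_emb = rng.normal(size=(N_DIM, len(src_vocab)))
tgt_emb = rng.normal(size=(N_DIM, len(tgt_vocab)))

h_r = rule_embedding(["wo", "[X1]", "ni"], ["i", "[X1]", "you"],
                     src_vocab, src_emb, tgt_vocab, tgt_emb)
print(h_r.shape)  # (510,), matching K' = 30 x (5 + 12) in footnote 1
```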
4 Training Method

4.1 Training Objective

For the log-linear model, there are various tuning methods, e.g. MERT (Och, 2003), MIRA (Watanabe et al., 2007; Chiang et al., 2008), PRO (Hopkins and May, 2011), and so on, which iteratively optimize a weight such that, after re-ranking a k-best list of a given development set with this weight, the loss of the resulting 1-best list is minimal. In the extreme case, if the k-best list consists only of a pair of translations ⟨⟨e∗, d∗⟩, ⟨e′, d′⟩⟩, the desirable weight should satisfy the following assertion: if the BLEU score of e∗ is greater than that of e′, then the model score of ⟨e∗, d∗⟩ with this weight will also be greater than that of ⟨e′, d′⟩. In this paper, a pair ⟨e∗, e′⟩ for a source sentence f is called a preference pair for f. Following PRO, we define the following objective function under the max-margin framework to optimize the AdNN model:

$$\frac{1}{2}\,\|\theta\|^2 + \frac{\lambda}{N} \sum_{f} \sum_{e^*, d^*, e', d'} \delta(f, e^*, d^*, e', d'; \theta), \quad (6)$$

with

$$\delta(\cdot) = \max\big( S(f, e', d'; \theta) - S(f, e^*, d^*; \theta) + 1,\; 0 \big),$$

where f is a source sentence in a given development set, and ⟨⟨e∗, d∗⟩, ⟨e′, d′⟩⟩ is a preference pair for f; N is the number of all preference pairs; λ > 0 is a regularizer.

4.2 Optimization Algorithm

Since there are thousands of parameters in Eq. (6) and tuning in SMT will minimize Eq. (6) repeatedly, efficient and scalable optimization methods are required. Following Le et al. (2011), we apply the mini-batch Conjugate Sub-Gradient (mini-batch CSG) method to minimize Eq. (6). Compared with sub-gradient descent, mini-batch CSG has some advantages: (1) it accelerates the calculation of the sub-gradient, since it calculates the sub-gradient on a subset of preference pairs (i.e. a mini-batch) instead of all of the preference pairs; (2) it reduces the number of iterations, since it employs conjugate information besides the sub-gradient. Algorithm 1 shows the procedure to minimize Eq. (6).

Algorithm 1 Mini-batch conjugate sub-gradient
Input: θ1, T, CGIter, batch-size, k-best-list
1: for all t such that 1 ≤ t ≤ T do
2:   Sample mini-batch preference pairs of size batch-size from k-best-list
3:   Calculate the quantities needed for CG, e.g. the training objective Obj and the subgradient ∆, according to Eq. (6) defined over the sampled preference pairs
4:   θt+1 = CG(θt, Obj, ∆, CGIter)
5: end for
Output: θT+1

In detail, line 2 of Algorithm 1 first follows PRO to sample a set of preference pairs from the k-best-list, and then uniformly samples batch-size pairs from that set. Line 3 calculates the quantities for CG, and line 4 calls a CG optimizer2 and obtains θt+1. At the end, the algorithm returns the result θT+1. In this work, we set the maximum number of CG iterations, CGIter, to a small number, which means θt+1 will be returned within CGIter iterations, before CG converges, for faster learning.

2 In our implementation, we call the CG toolkit (Hager and Zhang, 2006), which requires overloading objective and subgradient functions. For easier description, we substitute the overloading functions and pass the values of the functions in the pseudo-code.
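As an illustration of the training objective, Eq. (6) and one mini-batch update can be sketched as follows. The score_fn and grad_fn callbacks are hypothetical placeholders for the AdNN score and its gradient with respect to θ, and plain sub-gradient descent stands in here for the conjugate sub-gradient (CG) step of Algorithm 1, purely to keep the sketch short.

```python
import numpy as np

def hinge_objective(score_fn, theta, pref_pairs, lam):
    """Max-margin objective of Eq. (6) over preference pairs.
    pref_pairs: list of (f, good, bad) where `good` = <e*, d*> outranks `bad` = <e', d'>
    in BLEU; score_fn(theta, f, cand) returns the model score S(f, e, d; theta)."""
    reg = 0.5 * float(np.sum(theta ** 2))
    losses = [max(score_fn(theta, f, bad) - score_fn(theta, f, good) + 1.0, 0.0)
              for f, good, bad in pref_pairs]
    return reg + lam / len(pref_pairs) * sum(losses)

def minibatch_subgradient_step(score_fn, grad_fn, theta, pref_pairs,
                               lam, batch_size, step_size, rng):
    """One mini-batch sub-gradient step on Eq. (6).
    grad_fn(theta, f, cand) returns the gradient of the score w.r.t. theta."""
    batch = rng.choice(len(pref_pairs), size=batch_size, replace=False)
    grad = theta.copy()                    # gradient of the regularizer 0.5 * ||theta||^2
    for i in batch:
        f, good, bad = pref_pairs[i]
        margin = score_fn(theta, f, bad) - score_fn(theta, f, good) + 1.0
        if margin > 0.0:                   # hinge is active: add its sub-gradient
            grad += lam / batch_size * (grad_fn(theta, f, bad) - grad_fn(theta, f, good))
    return theta - step_size * grad
```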
4.3 Pre-Training and Post-Training

Since Eq. (6) is non-linear, it has many local minima. This problem is inherent, and many neural network based approaches to other NLP tasks, such as language modeling and parsing, suffer from it as well. These works empirically show that pre-training methods, which provide a reasonable initial solution, can improve performance. Observing the structure of Eq. (5) and the relationship between our model and a log-linear model, we propose the following simple pre-training method. If we set W′ = 0, the model defined in Eq. (5) can be regarded as a log-linear model with features h. Therefore, we pre-train W using MERT or PRO while holding W′ = 0, and use (W, W′ = 0, M, B) as an initializer for Algorithm 1.3

Although the above pre-training provides a reasonable starting point, Algorithm 1 may still fall into local minima. We therefore also propose a post-training method: after obtaining a solution with Algorithm 1, we modify this solution slightly to get a new one. The idea of post-training is similar to that of pre-training. Let θ = (W, W′, M, B) be the solution obtained from Algorithm 1. If we consider both M and B to be constant, Eq. (5) goes back to a log-linear model whose features are (h, σ(M · h′ + B)) and whose parameters are (W, W′). Again, we train the parameters (W, W′) with MERT or PRO and obtain new parameters (W̄, W̄′). We then set θ = (W̄, W̄′, M, B) as the final solution for Eq. (6). The advantage of post-training is that it optimizes a convex program derived from the original non-linear (non-convex) program in Eq. (6), and thus it may decrease the risk of poor local optima.

3 To avoid symmetry in the solution, in practice we sample a very small (M, B) from a Gaussian distribution instead of setting (M, B) = 0.

4.4 Training Algorithm

Algorithm 2 Training Algorithm
Input: MaxIter, a dev set, parameters (e.g. λ) for Algorithm 1
1: Pre-train to obtain θ1 = (W, W′ = 0, M, B) as the initial parameter
2: for all i such that 1 ≤ i ≤ MaxIter do
3:   Decode with θi on the dev set and merge all k-best-lists
4:   Run Algorithm 1 based on the merged k-best-list to obtain θi+1
5: end for
6: Post-train based on θMaxIter+1 to obtain θ
Output: θ

The whole training procedure for the AdNN model is summarized in Algorithm 2. Given a development set, we first run pre-training to obtain an initial parameter θ1 for Algorithm 1 in line 1. Secondly, it iteratively performs decoding and optimization MaxIter times in the loop from line 2 to line 5: it decodes with the parameter θi and merges all the k-best-lists in line 3, and then runs Algorithm 1 to obtain θi+1. Thirdly, it runs post-training to get the final result θ based on θMaxIter+1. Of course, we could run post-training after running Algorithm 1 at each iteration i. However, since each pass of post-training (e.g. PRO) takes several hours because it requires decoding multiple times, we run it only once, at the end of the iterations.

5 Experiments and Results

5.1 Experimental Setting

We conduct our experiments on Chinese-to-English and Japanese-to-English translation tasks. For the Chinese-to-English task, the training data is the FBIS corpus (news domain) with about 240k sentence pairs; the development set is the NIST02 evaluation data; the development test set is NIST05; and the test datasets are NIST06 and NIST08. For the Japanese-to-English task, the training data with 300k sentence pairs is from the NTCIR-patent task (Fujii et al., 2010); the development set, development test set, and two test sets are evenly extracted from a given development set of 4,000 sentences, and these four datasets are called test1, test2, test3 and test4, respectively. We run GIZA++ (Och and Ney, 2000) on the training corpus in both directions (Koehn et al., 2003) to obtain the word alignment for each sentence pair.
Using the SRILM Toolkits (Stolcke, 2002) with modified Kneser-Ney smoothing, we train a 4-gram language model for the Chinese-toEnglish task on the Xinhua portion of the English Gigaword corpus and a 4-gram language model for the Japanese-to-English task on the target side of its training data. In our experiments, the translation performances are measured by case-sensitive BLEU4 metric4 (Papineni et al., 2002). The significance testing is performed by paired bootstrap re-sampling (Koehn, 2004b). We use an in-house developed hierarchical phrase-based translation (Chiang, 2005) for our baseline system, which shares the similar setting as Hiero (Chiang, 2005), e.g. beam-size=100, kbest-size=100, and is denoted as L-Hiero to emphasize its log-linear model. We tune L-Hiero with two methods MERT and PRO implemented in the Moses toolkit. On the same experiment settings, the performance of L-Hiero is comparable 4We use mteval-v13a.pl as the evaluation tool(Ref. http://www.itl.nist.gov/iad/mig/tests/mt/2008/scoring.html). 796 Seconds/Sent L-Hiero 1.77 AdNN-Hiero-E 1.88 Table 1: The decoding time comparison on NIST05 between L-Hiero and AdNN-Hiero-E. to that of Moses: on the NIST05 test set, L-Hiero achieves 25.1 BLEU scores and Moses achieves 24.8. Further, we integrate the embedding features (See Section 3.3) into the log-linear model along with the default features as L-Hiero, which is called L-Hiero-E. Since L-Hiero-E has hundreds of features, we use PRO as its tuning toolkit. AdNN-Hiero-E is our implementation of the AddNN model with embedding features, as discussed in Section 3, and it shares the same codebase and settings as L-Hiero. We adopt the following setting for training AdNN-HieroE: u=10; batch-size=1000 and CGiter=3, as referred in (Le et al., 2011), and T=200 in Algorithm 1; the pre-training and post-training methods as PRO; the regularizer λ in Eq. (6) as 10 and 30, and MaxIter as 16 and 20 in Algorithm 2, for Chinese-to-English and Japanese-to-English tasks, respectively. Although there are several parameters in AdNN which may limit its practicability, according to many of our internal studies, most parameters are insensitive to AdNN except λ and MaxIter, which are common in other tuning toolkits such as MIRA and can be tuned5 on a development test dataset. Since both MERT and PRO tuning toolkits involve randomness in their implementations, all BLEU scores reported in the experiments are the average of five tuning runs, as suggested by Clark et al. (2011) for fairer comparisons. For AdNN, we report the averaged scores of five post-training runs, but both pre-training and training are performed only once. 5.2 Results and Analysis As discussed in Section 3, our AdNN-Hiero-E shares the same decoding strategy and pruning method as L-Hiero. When compared with LHiero, decoding for AdNN-Hiero-E only needs additional computational times for the features in the hidden units, i.e. σ M · h′(r) + B  . Since 5For easier tuning, we tuned these two parameters on a given development test set without post-training in Algorithm 2. Chinese-to-English NIST05 NIST06 NIST08 L-Hiero MERT 25.10+ 24.46+ 17.42+ PRO 25.57+ 25.27+ 18.33+ L-Hiero-E PRO 24.80+ 24.46+ 18.20+ AdNN-Hiero-E 26.37 25.93 19.42 Japanese-to-English test2 test3 test4 L-Hiero MERT 24.35+ 25.62+ 23.68+ PRO 24.38+ 25.55+ 23.66+ L-Hiero-E PRO 24.47+ 25.86+ 24.03+ AdNN-Hiero-E 25.14 26.32 24.45 Table 2: The BLEU comparisons between AdNNHiero-E and Log-linear translation models on the Chinese-to-English and Japanese-to-English tasks. 
+ means the comparison is significant over AdNN-Hiero-E with p < 0.05. these features are not dependent on the translation states, they are computed and saved to memory when loading the translation model. During decoding, we just look up these scores instead of re-calculating them on the fly. Therefore, the decoding efficiency of AdNN-Hiero-E is almost the same as that of L-Hiero. As shown in Table 1 the average decoding time for L-Hiero is 1.77 seconds/sentence while that for AdNN-Hiero-E is 1.88 seconds/sentence on the NIST05 test set. Word embedding features can improve the performance on other NLP tasks (Turian et al., 2010), but its effect on log-linear based SMT is not as expected. As shown in Table 2, L-Hiero-E gains little over L-Hiero for the Japanese-to-English task, and even decreases the performance over L-Hiero for the Chinese-to-English task. These results further prove our claim in Section 1, i.e. the loglinear model requires the features to be linear with the model and thus limits its expressive abilities. However, after the single-layer non-linear operator (sigmoid functions) on the embedding features for deep interpretation and representation, AdNNHiero-E gains improvements over both L-Hiero and L-Hiero-E, as depicted in Table 2. In detail, for the Chinese-to-English task, AdNN-Hiero-E improves more than 0.6 BLEU scores over LHiero on both test sets: the gains over L-Hiero tuned with PRO are 0.66 and 1.09 on NIST06 and NIST08, respectively, and the gains over L-Hiero tuned with MERT are even more. Similar results are achieved on the Japanese-to-English task. AdNN-Hiero-E gains about 0.7 BLEU scores on 797 Chinese-to-English NIST05 NIST06 NIST08 L-Hiero 25.57+ 25.27+ 18.33+ AdNN-Hiero-E 26.37 25.93 19.42 AdNN-Hiero-D 26.21 26.07 19.54 Japanese-to-English test2 test3 test4 L-Hiero 24.38 25.55 23.66 AdNN-Hiero-E 25.14+ 26.32+ 24.45+ AdNN-Hiero-D 24.42 25.46 23.73 Table 3: The effect of different feature setting on AdNN model. + means the comparison is significant over AdNN-Hiero-D with p < 0.05. both test sets. In addition, to investigate the effect of different feature settings on AdNN, we alternatively design another setting for h′ in Eq. (5): we use the default features for both h′ and h. In particular, the language model of a rule for h′ is locally calculated without the contexts out of the rule as described in (Chiang, 2007). We call the AdNN model with this setting AdNN-Hiero-D6. Although there are serious overlaps between h and h′ for AdNN-Hiero-D which may limit its generalization abilities, as shown in Table 3, it is still comparable to L-Hiero on the Japanese-to-English task, and significantly outperforms L-Hiero on the Chinese-to-English translation task. To investigate the reason why the gains for AdNN-Hiero-D on the two different translation tasks differ, we calculate the perplexities between the target side of training data and test datasets on both translation tasks. We find that the perplexity of the 4-gram language model for the Chinese-to-English task is 321.73, but that for the Japanese-to-English task is only 81.48. Based on these similarity statistics, we conjecture that the log-linear model does not fit well for difficult translation tasks (e.g. translation task on the news domain). The problem seems to be resolved by simply alternating feature representations through non-linear models, i.e. AddNHiero-D, even with single-layer networks. 6 Related Work Neural networks have achieved widespread attentions in many NLP tasks, e.g. 
the language 6All its parameters are shared with AdNN-Hiero-E except λ and MaxIter, which are tuned on the development test datasets. model (Bengio et al., 2003); POS, Chunking, NER, and SRL (Collobert and Weston, 2008); Parsing (Collobert and Weston, 2008; Socher et al., 2011); and Machine transliteration (Deselaers et al., 2009). Our work is, of course, highly motivated by these works. Unlike these works, we propose a variant neural network, i.e. additive neural networks, starting from SMT itself and taking both of the model definition and its inference (decoding) together into account. Our variant of neural network, AdNN, is highly related to both additive models (Buja et al., 1989) and generalized additive neural networks (Potts, 1999; Waal and Toit, 2007), in which an additive term is either a linear model or a neural network. Unlike additive models and generalized additive neural networks, our model is decomposable with respect to translation rules rather than its component variables considering the decoding efficiency of machine translation; and it allows its additive terms of neural networks to share the same parameters for a compact structure to avoid sparsity. The idea of the neural network in machine translation has already been pioneered in previous works. Casta˜no et al. (1997) introduced a neural network for example-based machine translation. In particular, Son et al. (2012) and Schwenk (2012) employed a neural network to model the phrase translation probability on the rule level ⟨α, γ⟩instead of the bilingual sentence level ⟨f, e⟩ as in Eq. (5), and thus they did not go beyond the log-linear model for SMT. There are also works which exploit non-linear models in SMT. Duh and Kirchhoff (2008) proposed a boosting re-ranking algorithm using MERT as a week learner to improve the model’s expressive abilities; Sokolov et al. (2012) similarly proposed a boosting re-ranking method from the ranking perspective rather than the classification perspective. Instead of considering the reranking task in SMT, Xiao et al. (2010) employed a boosting method for the system combination in SMT. Unlike their post-processing models (either a re-ranking or a system combination model) in SMT, we propose a non-linear translation model which can be easily incorporated into the existing SMT framework. 7 Conclusion and Future Work In this paper, we go beyond the log-linear model for SMT and propose a novel AdNN based trans798 lation model. Our model overcomes some of the shortcomings suffered by the log-linear model: linearity and the lack of deep interpretation and representation in features. One advantage of our model is that it jointly learns features and tunes the translation model and thus learns features towards the translation evaluation metric. Additionally, the decoding of our model is as efficient as that of the log-linear model. For Chinese-toEnglish and Japanese-to-English translation tasks, our model significantly outperforms the log-linear model, with the help of word embedding. We plan to explore more work on the additive neural networks in the future. For example, we will train word embedding matrices for source and target languages from a larger corpus, and take into consideration the bilingual information, for instance, word alignment; the multi-layer neural network within the additive neural networks will be also investigated in addition to the single-layer neural network; and we will test our method on other translation tasks with larger training data as well. 
Acknowledgments We would like to thank our colleagues in both HIT and NICT for insightful discussions, Tomas Mikolov for the helpful discussion about the word embedding in RNNLM, and three anonymous reviewers for many invaluable comments and suggestions to improve our paper. This work is supported by National Natural Science Foundation of China (61173073, 61100093, 61073130, 61272384), the Key Project of the National High Technology Research and Development Program of China (2011AA01A207), and the Fundamental Research Funds for Central Universities (HIT.NSRIF.2013065). References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137–1155, March. Christopher M. Bishop. 1995. Neural Networks for Pattern Recognition. Oxford University Press, Inc., New York, NY, USA. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Comput. Linguist., 19:263–311, June. Andreas Buja, Trevor Hastie, and Robert Tibshirani. 1989. Linear smoothers and additive models. The Annals of Statistics, 17:453–510. M. Asuncin Casta˜no, Francisco Casacuberta, and Enrique Vidal. 1997. Machine translation using neural networks and finite-state models. In TMI, pages 160–167. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In Proc. of EMNLP. ACL. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 263– 270, Stroudsburg, PA, USA. Association for Computational Linguistics. David Chiang. 2007. Hierarchical phrase-based translation. Comput. Linguist., 33(2):201–228, June. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2, HLT ’11, pages 176–181, Stroudsburg, PA, USA. Association for Computational Linguistics. R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In International Conference on Machine Learning, ICML. R. Collobert. 2011. Deep learning for efficient discriminative parsing. In AISTATS. Thomas Deselaers, Saˇsa Hasan, Oliver Bender, and Hermann Ney. 2009. A deep learning approach to machine transliteration. In Proceedings of the Fourth Workshop on Statistical Machine Translation, StatMT ’09, pages 233–241, Stroudsburg, PA, USA. Association for Computational Linguistics. Kevin Duh and Katrin Kirchhoff. 2008. Beyond loglinear models: Boosted minimum error rate training for n-best re-ranking. In Proceedings of ACL-08: HLT, Short Papers, pages 37–40, Columbus, Ohio, June. Association for Computational Linguistics. Atsushi Fujii, Masao Utiyama, Mikio Yamamoto, and Takehito Utsuro. 2010. Overview of the patent translation task at the ntcir-8 workshop. In In Proceedings of the 8th NTCIR Workshop Meeting on Evaluation of Information Access Technologies: Information Retrieval, Question Answering and Cross-lingual Information Access, pages 293– 302. 799 William W. Hager and Hongchao Zhang. 2006. 
Algorithm 851: Cg descent, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw., 32(1):113–137, March. Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1352–1362, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of HLT-NAACL. ACL. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 177–180, Stroudsburg, PA, USA. Association for Computational Linguistics. Philipp Koehn. 2004a. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In AMTA. Philipp Koehn. 2004b. Statistical significance tests for machine translation evaluation. In Proc. of EMNLP. ACL. Quoc V. Le, Jiquan Ngiam, Adam Coates, Ahbik Lahiri, Bobby Prochnow, and Andrew Y. Ng. 2011. On optimization methods for deep learning. In ICML, pages 265–272. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048. Patrick Nguyen, Milind Mahajan, and Xiaodong He. 2007. Training non-parametric features for statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 72–79, Prague, Czech Republic, June. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, ACL ’00, pages 440–447, Stroudsburg, PA, USA. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 295–302, Stroudsburg, PA, USA. Association for Computational Linguistics. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan, July. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics. William J. E. Potts. 1999. Generalized additive neural networks. In Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’99, pages 194–200, New York, NY, USA. ACM. Holger Schwenk. 2012. Continuous space translation models for phrase-based statistical machine translation. In Proceedings of the 24th International Conference on Computational Linguistics, COLING ’12, Mumbai, India. Association for Computational Linguistics. Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. 
Parsing Natural Scenes and Natural Language with Recursive Neural Networks. In Proceedings of the 26th International Conference on Machine Learning (ICML). A. Sokolov, G. Wisniewski, and F. Yvon. 2012. Nonlinear n-best list reranking with few features. In AMTA, San Diego, USA. Le Hai Son, Alexandre Allauzen, and Franc¸ois Yvon. 2012. Continuous space translation models with neural networks. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT ’12, pages 39– 48, Stroudsburg, PA, USA. Association for Computational Linguistics. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In Proc. of ICSLP. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 384–394, Stroudsburg, PA, USA. Association for Computational Linguistics. D. A. de Waal and J. V. du Toit. 2007. Generalized additive models from a neural network perspective. In Proceedings of the Seventh IEEE International Conference on Data Mining Workshops, ICDMW ’07, pages 265–270, Washington, DC, USA. IEEE Computer Society. 800 Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for statistical machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 764–773, Prague, Czech Republic, June. Association for Computational Linguistics. Tong Xiao, Jingbo Zhu, Muhua Zhu, and Huizhen Wang. 2010. Boosting-based system combination for machine translation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 739–748, Stroudsburg, PA, USA. Association for Computational Linguistics. 801
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 802–810, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Hierarchical Phrase Table Combination for Machine Translation Conghui Zhu1 Taro Watanabe2 Eiichiro Sumita2 Tiejun Zhao1 1School of Computer Science and Technology Harbin Institute of Technology (HIT), Harbin, China 2National Institute of Information and Communication Technology 3-5 Hikari-dai, Seika-cho, Soraku-gun, Kyoto, Japan {chzhu,tjzhao}@mtlab.hit.edu.cn {taro.watanabe,Sumita}@nict.go.jp Abstract Typical statistical machine translation systems are batch trained with a given training data and their performances are largely influenced by the amount of data. With the growth of the available data across different domains, it is computationally demanding to perform batch training every time when new data comes. In face of the problem, we propose an efficient phrase table combination method. In particular, we train a Bayesian phrasal inversion transduction grammars for each domain separately. The learned phrase tables are hierarchically combined as if they are drawn from a hierarchical Pitman-Yor process. The performance measured by BLEU is at least as comparable to the traditional batch training method. Furthermore, each phrase table is trained separately in each domain, and while computational overhead is significantly reduced by training them in parallel. 1 Introduction Statistical machine translation (SMT) systems usually achieve ’crowd-sourced’ improvements with batch training. Phrase pair extraction, the key step to discover translation knowledge, heavily relies on the scale of training data. Typically, the more parallel corpora used, the more phrase pairs and more accurate parameters will be learned, which can obviously be beneficial to improving translation performances. Today, more parallel sentences are drawn from divergent domains, and the size keeps growing. Consequently, how to effectively use those data and improve translation performance becomes a challenging issue. This joint work was done while the first author visited NICT. Batch retraining is not acceptable for this case, since it demands serious computational overhead when training on a large data set, and it requires us to re-train every time new training data is available. Even if we can handle the large computation cost, improvement is not guaranteed every time we perform batch tuning on the newly updated training data obtained from divergent domains. Traditional domain adaption methods for SMT are also not adequate in this scenario. Most of them have been proposed in order to make translation systems perform better for resource-scarce domains when most training data comes from resourcerich domains, and ignore performance on a more generic domain without domain bias (Wang et al., 2012). As an alternative, incremental learning may resolve the gap by incrementally adding data sentence-by-sentence into the training data. Since SMT systems trend to employ very large scale training data for translation knowledge extraction, updating several sentence pairs each time will be annihilated in the existing corpus. This paper proposes a new phrase table combination method. First, phrase pairs are extracted from each domain without interfering with other domains. In particular, we employ the nonparametric Bayesian phrasal inversion transduction grammar (ITG) of Neubig et al. (2011) to perform phrase table extraction. 
Second, extracted phrase tables are combined as if they are drawn from a hierarchical Pitman-Yor process, in which the phrase tables represented as tables in the Chinese restaurant process (CRP) are hierarchically chained by treating each of the previously learned phrase tables as prior to the current one. Thus, we can easily update the chain of phrase tables by appending the newly extracted phrase table and by treating the chain of the previous ones as its prior. Experiment results indicate that our method can achieve better translation performance when there exists a large divergence in domains, and can 802 achieve at least comparable results to batch training methods, with a significantly less computational overhead. The rest of the paper is organized as follows. In Section 2, we introduce related work. In section 3, we briefly describe the translation model with phrasal ITGs and Pitman-Yor process. In section 4, we explain our hierarchical combination approach and give experiment results in section 5. We conclude the paper in the last section. 2 Related Work Bilingual phrases are cornerstones for phrasebased SMT systems (Och and Ney, 2004; Koehn et al., 2003; Chiang, 2005) and existing translation systems often get ‘crowd-sourced’ improvements (Levenberg et al., 2010). A number of approaches have been proposed to make use of the full potential of the available parallel sentences from various domains, such as domain adaptation and incremental learning for SMT. The translation model and language model are primary components in SMT. Previous work proved successful in the use of large-scale data for language models from diverse domains (Brants et al., 2007; Schwenk and Koehn, 2008). Alternatively, the language model is incrementally updated by using a succinct data structure with a interpolation technique (Levenberg and Osborne, 2009; Levenberg et al., 2011). In the case of the previous work on translation modeling, mixed methods have been investigated for domain adaptation in SMT by adding domain information as additional labels to the original phrase table (Foster and Kuhn, 2007). Under this framework, the training data is first divided into several parts, and phase pairs are extracted with some sub-domain features. Then all the phrase pairs and features are tuned together with different weights during decoding. As a way to choose the right domain for the domain adaption, a classifier-based method and a feature-based method have been proposed. Classification-based methods must at least add an explicit label to indicate which domain the current phrase pair comes from. This is traditionally done with an automatic domain classifier, and each input sentence is classified into its corresponding domain (Xu et al., 2007). As an alternative to the classification-based approach, Wang et al. (2012) employed a featurebased approach, in which phrase pairs are enriched by a feature set to potentially reflect the domain information. The similarity calculated by a information retrieval system between the training subset and the test set is used as a feature for each parallel sentence (Lu et al., 2007). Monolingual topic information is taken as a new feature for a domain adaptive translation model and tuned on the development set (Su et al., 2012). Regardless of underlying methods, either classifier-based or featurebased method, the performance of current domain adaptive phrase extraction methods is more sensitive to the development set selection. 
Usually the domain similar to a given development data is usually assigned higher weights. Incremental learning in which new parallel sentences are incrementally updated to the training data is employed for SMT. Compared to traditional frequent batch oriented methods, an online EM algorithm and active learning are applied to phrase pair extraction and achieves almost comparable translation performance with less computational overhead (Levenberg et al., 2010; Gonz´alezRubio et al., 2011). However, their methods usually require numbers of hyperparameters, such as mini-batch size, step size, or human judgment to determine the quality of phrases, and still rely on a heuristic phrase extraction method in each phrase table update. 3 Phrase Pair Extraction with Unsupervised Phrasal ITGs Recently, phrase alignment with ITGs (Cherry and Lin, 2007; Zhang et al., 2008; Blunsom et al., 2008) and parameter estimation with Gibbs sampling (DeNero and Klein, 2008; Blunsom and Cohn, 2010) are popular. Here, we employ a method proposed by Neubig et al. (2011), which uses parametric Bayesian inference with the phrasal ITGs (Wu, 1997). It can achieve comparable translation accuracy with a much smaller phrase table than the traditional GIZA++ and heuristic phrase extraction methods. It has also been proved successful in adjusting the phrase length granularity by applying character-based SMT with more sophisticated inference (Neubig et al., 2012). ITG is a synchronous grammar formalism which analyzes bilingual text by introducing inverted rules, and each ITG derivation corresponds to the alignment of a sentence pair (Wu, 1997). Translation probabilities of ITG phrasal align803 ments can be estimated in polynomial time by slightly limiting word reordering (DeNero and Klein, 2008). More formally, P ⟨e, f⟩; θx, θt  are the probability of phrase pairs ⟨e, f⟩, which is parameterized by a phrase pair distribution θt and a symbol distribution θx. θx is a Dirichlet prior, and θt is estimated with the Pitman-Yor process (Pitman and Yor, 1997; Teh, 2006), which is expressed as θt ∼PY d, s, Pdac  (1) where d is the discount parameter, s is the strength parameter, and , and Pdac is a prior probability which acts as a fallback probability when a phrase pair is not in the model. Under this model, the probability for a phrase pair found in a bilingual corpus ⟨E, F⟩can be represented by the following equation using the Chinese restaurant process (Teh, 2006): P ⟨ei, fi⟩; ⟨E, F⟩  = 1 C + s(ci −d × ti)+ 1 C + s(s + d × T) × Pdac(⟨ei, fi⟩) (2) where 1. ci and ti are the customer and table count of the ith phrase pair ⟨ei, fi⟩found in a bilingual corpus ⟨E, F⟩; 2. C and T are the total customer and table count in corpus ⟨E, F⟩; 3. d and s are the discount and strengthen hyperparameters. The prior probability Pdac is recursively defined by breaking a longer phrase pair into two through the recursive ITG’s generative story as follows (Neubig et al., 2011): 1. Generate symbol x from Px(x; θx) with three possible values: Base, REG, or INV . 2. Depending on the value of x take the following actions. a. If x = Base, generate a new phrase pair directly from Pbase. b. If x = REG, generate ⟨e1, f1⟩and ⟨e2, f2⟩from P ⟨e, f⟩; θx, θt  , and concatenate them into a single phrase pair ⟨e1e2, f1f2⟩. Figure 1: A word alignment (a), and its hierarchical derivation (b). c. If x = INV , follow a similar process as b, but concatenate f1 and f2 in reverse order ⟨e1e2, f2f1⟩. 
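As a concrete illustration (not code from the pialign toolkit), Eq. (2) can be transcribed directly; the counts and hyperparameter values below are invented toy numbers:

```python
def phrase_pair_prob(c_i, t_i, C, T, d, s, p_fallback):
    """Chinese-restaurant-process probability of Eq. (2): a discounted count term
    plus a fallback term weighted by (s + d*T) / (C + s).

    c_i, t_i   : customer and table counts of this phrase pair
    C, T       : total customer and table counts in the corpus
    d, s       : discount and strength hyperparameters
    p_fallback : P_dac(<e_i, f_i>), the recursively defined prior"""
    count_term = (c_i - d * t_i) / (C + s)
    fallback_term = (s + d * T) / (C + s) * p_fallback
    return count_term + fallback_term

# Toy numbers: a phrase pair seated 5 times at 2 tables, in a corpus with
# 1000 customers and 400 tables, d = 0.5, s = 1.0, and a prior of 1e-6.
print(phrase_pair_prob(c_i=5, t_i=2, C=1000, T=400, d=0.5, s=1.0, p_fallback=1e-6))
```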
Note that the Pdac is recursively defined through the binary branched P, which in turns employs Pdac as a prior probability. Pbase is a base measure defined as a combination of the IBM Models in two directions and the unigram language models in both sides. Inference is carried out by a heuristic beam search based block sampling with an efficient look ahead for a faster convergence (Neubig et al., 2012). Compared to GIZA++ with heuristic phrase extraction, the Bayesian phrasal ITG can achieve competitive accuracy under a smaller phrase table size. Further, the fallback model can incorporate phrases of all granularity by following the ITG’s recursive definition. Figure 1 (b) illustrates an example of the phrasal ITG derivation for word alignment in Figure 1 (a) in which a bilingual sentence pair is recursively divided into two through the recursively defined generative story. 4 Hierarchical Phrase Table Combination We propose a new phrase table combination method, in which individually learned phrase table are hierarchically chained through a hierarchical Pitman-Yor process. Firstly, we assume that the whole training data ⟨E, F⟩can be split into J domains, {⟨E1, F 1⟩, . . . , ⟨EJ, F J⟩}. Then phrase pairs are 804 Figure 2: A hierarchical phrase table combination (a), and a basic unit of a Chinese restaurant process with K tables and N customers. extracted from each domain j (1 ≤j ≤J) separately with the method introduced in Section 3. In traditional domain adaptation approaches, phrase pairs are extracted together with their probabilities and/or frequencies so that the extracted phrase pairs are merged uniformly or after scaling. In this work, we extract the table counts for each phrase pair under the Chinese restaurant process given in Section 3. In Figure 2 (b), a CRP is illustrated which has K tables and N customers with each chair representing a customer. Meanwhile there are two parameters, discount and strength for each domain similar to the ones in Equation (1). Our proposed hierarchical phrase table combination can be formally expressed as following: θ1 ∼PY (d1, s1, P 2) · · · · · · θj ∼PY (dj, sj, P j+1) · · · · · · θJ ∼PY dJ, sJ, P J base  (3) Here the (j + 1)th layer hierarchical Pitman-Yor process is employed as a base measure for the jth layer hierarchical Pitman-Yor process. The hierarchical chain is terminated by the base measure from the Jth domain P J base. The hierarchical structure is illustrated in Figure 2 (a) in which the solid lines implies a fall back using the table counts from the subsequent domains, and the dotted lines means the final fallback to the base measure P J base. When we query a probability of a phrase pair ⟨e, f⟩, we first query the probability of the first layer P 1(⟨e, f⟩). If ⟨e, f⟩is not in the model, we will fallback to the next level of P 2(⟨e, f⟩). This process continues until we reach the Jth base measure of P J(⟨e, f⟩). Each fallback can be viewed as a translation knowledge integration process between subsequent domains. For example in Figure 2 (a), the ith phrase pair ⟨ei, fi⟩appears only in the domain 1 and domain 2, so its translation probability can be calculated by substituting Equation (3) with Equation (2): P ⟨ei, fi⟩; ⟨E, F⟩  = 1 C1 + s1 (c1 i −d1 × t1 i ) + s1 + d1 × T 1 (C1 + s1) × (C2 + s2)(c2 i −d2 × t2 i ) + J Y j=1 sj + dj × T j Cj + sj  × P J base(⟨ei, fi⟩) (4) where the superscript indicates the domain for the corresponding counts, i.e. cj i for the customer count in the jth domain. 
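The fallback computation of Equation (4) can be illustrated with a short Python sketch (the counts and hyperparameters are invented toy values, and this is not code from the actual system): each domain contributes a discounted count wherever the pair is observed, and the accumulated fallback weight carries the remaining mass down to the final base measure.

```python
def hierarchical_phrase_prob(domain_stats, p_base_J):
    """Hierarchical fallback of Eq. (4): domain_stats is a list of per-domain tuples
    (c_i, t_i, C, T, d, s) for domains 1..J, with c_i = t_i = 0 when the phrase pair
    is unseen in that domain; p_base_J is the final base measure P^J_base(<e_i, f_i>)."""
    prob, weight = 0.0, 1.0
    last = len(domain_stats) - 1
    for j, (c_i, t_i, C, T, d, s) in enumerate(domain_stats):
        if j == last:
            # Final layer: discounted count plus the fallback to P^J_base.
            prob += weight * (c_i - d * t_i + (s + d * T) * p_base_J) / (C + s)
        else:
            if c_i > 0:  # pair observed in this domain
                prob += weight * (c_i - d * t_i) / (C + s)
            weight *= (s + d * T) / (C + s)  # fallback weight passed to the next layer
    return prob

# Toy example with J = 3 domains; the pair is seen in domains 1 and 2 only,
# matching the worked example of Eq. (4).
stats = [(5, 2, 1000, 400, 0.5, 1.0),   # domain 1
         (3, 1, 800, 300, 0.5, 1.0),    # domain 2
         (0, 0, 1200, 500, 0.5, 1.0)]   # domain 3: unseen here
print(hierarchical_phrase_prob(stats, p_base_J=1e-6))
```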
The first term in Equation (4) is the phrase probability from the first domain, and the second one comes from the second domain, but weighted by the fallback weight of the 1st domain. Since ⟨ei, fi⟩does not appear in the rest of the layers, the last term is taken from all the fallback weight from the second layer to the Jth layer with the final P J base. All the parameters θj and hyperparameters dj and sj, are obtained by learning on the jth domain. Returning the hyperparameters again when cascading another domain may improve the performance of the combination weight, but we will leave it for future work. The hierarchical process can be viewed as an instance of adapted integration of translation knowledge from each sub-domain. 805 Algorithm 1 Translation Probabilities Estimation Input: cj i, tj i, P j base, Cj, T j, dj and sj Output: The translation probabilities for each pair 1: for all phrase pair ⟨ei, fi⟩do 2: Initialize the P(⟨ei, fi⟩) = 0 and wi = 1 3: for all domain ⟨Ej, Fj⟩such that 1 ⩽j ⩽ J −1 do 4: if ⟨ei, fi⟩∈⟨Ej, Fj⟩then 5: P(⟨ei, fi⟩) += wi × (Cj i −dj × tj i)/(Cj + sj) 6: end if 7: wi = wi × (sj + dj × T j)/(Cj + sj) 8: end for 9: P(⟨ei, fi⟩) += wi × (CJ i −dJ × tJ i + (sJ + dJ × T J) × P J base(⟨ei, fi⟩))/(CJ + sJ) 10: end for Our approach has several advantages. First, each phrase pair extraction can concentrate on a small portion of domain-specific data without interfering with other domains. Since no tuning stage is involved in the hierarchical combination, we can easily include a new phrase table from a new domain by simply chaining them together. Second, phrase pair phrase extraction in each domain is completely independent, so it is easy to parallelize in a situation where the training data is too large to fit into a small amount of memory. Finally, new domains can be integrated incrementally. When we encounter a new domain, and if a phrase pair is completely new in terms of the model, the phrase pair is simply appended to the current model, and computed without the fallback probabilities, since otherwise, the phrase pair would be boosted by the fallback probabilities. Pitman-Yor process is also employed in n-gram language models which are hierarchically represented through the hierarchical Pitman-Yor process with switch priors to integrate different domains in all the levels (Wood and Teh, 2009). Our work incrementally combines the models from different domains by directly employing the hierarchical process through the base measures. 5 Experiment We evaluate the proposed approach on the Chinese-to-English translation task with three data sets with different scales. Data set Corpus #sent. pairs IWSLT HIT 52, 603 BTEC 19, 975 Domain 1 47, 993 Domain 2 30, 272 FBIS Domain 3 49, 509 Domain 4 38, 228 Domain 5 55, 913 News 221, 915 News 95, 593 LDC Magazine 98, 335 Magazine 254, 488 Finance 86, 112 Table 1: The sentence pairs used in each data set. 5.1 Experiment Setup The first data set comes from the IWSLT2012 OLYMPICS task consisting of two training sets: the HIT corpus, which is closely related to the Beijing 2008 Olympic Games, and the BTEC corpus, which is a multilingual speech corpus containing tourism-related sentences. The second data set, the FBIS corpus, is a collection of news articles and does not have domain information itself, so a Latent Dirichlet Allocation (LDA) tool, PLDA1, is used to divide the whole corpus into 5 different sub-domains according to the concatenation of the source side and target side as a single sentence (Liu et al., 2011). 
The third data set is composed of 5 corpora2 from LDC with various domains, including news, magazine, and finance. The details are shown in Table 1. In order to evaluate our approach, four phrase pair extraction methods are performed: 1. GIZA-linear: Phase pairs are extracted in each domain by GIZA++ (Och and Ney, 2003) and the ”grow-diag-final-and” method with a maximum length 7. The phrase tables from various domains are linearly combined by averaging the feature values. 2. Pialign-linear: Similar to GIZA-linear, but we employed the phrasal ITG method described in Section 3 using the pialign toolkit 3 (Neubig et 1http://code.google.com/p/plda/ 2In particular, they come from LDC catalog number: LDC2002E18, LDC2002E58, LDC2003E14, LDC2005E47, LDC2006E26, in this order. 3http://www.phontron.com/pialign/ 806 Methods IWSLT FBIS LDC BLEU Size BLEU Size BLEU Size GIZA-linear 19.222 1,200,877 29.342 15,369,028 30.67 77,927,347 Pialign-linear 19.534 876,059 29.858 7,235,342 31.12 28,877,149 GIZA-batch 19.616 1,185,255 31.38 13,737,258 32.06 63,606,056 Pialign-batch 19.506 841,931 31.104 6,459,200 Pialign-adaptive 19.624 841,931 30.926 6,459,200 Hier-combin 20.32 876,059 31.29 7,235,342 32.03 28,877,149 Table 2: BLEU scores and phrase table size by alignment method and probabilities estimation method. Pialign was run with five samples. Because of computational overhead, the baseline Pialign-batch and Pialign-adaptive were not run on the largest data set. al., 2011). Extracted phrase pairs are linearly combined by averaging the feature values. 3. GIZA-batch: Instead of splitting into each domain, the data set is merged as a single corpus and then a heuristic GZA-based phrase extraction is performed, similar as GIZA-linear. 4. Pialign-batch: Similar to the GIZA-batch, a single model is estimated from a single, merged corpus. Since pialign cannot handle large data, we did not experiment on the largest LDC data set. 5. Pialign-adaptive: Alignment and phrase pairs extraction are same to Pialign-batch, while translation probabilities are estimated by the adaptive method with monolingual topic information (Su et al., 2012). The method established the relationship between the out-ofdomain bilingual corpus and in-domain monolingual corpora via topic distribution to estimate the translation probability. ø(˜e| ˜f) = X tf ø(˜e, tf| ˜f) = X tf ø(˜e|tf, ˜f) · P(tf| ˜f) (5) where ø(˜e|tf, ˜f) is the probability of translating ˜f into ˜e given the source-side topic ˜f , P(tf| ˜f) is the phrase-topic distribution of f. The method we proposed is named Hiercombin. It extracts phrase pairs in the same way as the Pialign-linear. In the phrase table combination process, the translation probability of each phrase pair is estimated by the Hier-combin and the other features are also linearly combined by averaging the feature values. Pialign is used with default parameters. The parameter ’samps’ is set to 5, which indicates 5 samples are generated for a sentence pair. The IWSLT data consists of roughly 2, 000 sentences and 3, 000 sentences each from the HIT and BTEC for development purposes, and the test data consists of 1, 000 sentences. For the FBIS and LDC task, we used NIST MT 2002 and 2004 for development and testing purposes, consisting of 878 and 1, 788 sentences respectively. We employ Moses, an open-source toolkit for our experiment (Koehn et al., 2007). 
SRILM Toolkit (Stolcke, 2002) is employed to train 4-gram language models on the Xinhua portion of Gigaword corpus, while for the IWLST2012 data set, only its training set is used. We use batch-MIRA (Cherry and Foster, 2012) to tune the weight for each feature and translation quality is evaluated by the case-insensitive BLEU-4 metric (Papineni et al., 2002). The BLEU scores reported in this paper are the average of 5 independent runs of independent batch-MIRA weight training, as suggested by (Clark et al., 2011). 5.2 Result and Analysis 5.2.1 Performances of various extraction methods We carry out a series of experiments to evaluate translation performance. The results are listed in Table 2. Our method significantly outperforms the baseline Pialign-linear. Except for the translation probabilities, the phrase pairs of two methods are exactly same, so the number of phrase pairs are equal in the two methods. Further more, the performance of the baseline Pialign-adaptive is also higher than the baseline Pialign-linear’s and lower than ours. This proves that the adaptive method 807 Methods Task Time(minute) Batch Retraining 536.9 Hierarchical Parallel Extraction 122.55 Combination Integrating 1.5 Total 124.05 Table 3: Minutes used for alignment and phase pair extraction in the FBIS data set. with monolingual topic information is useful in the tasks, but our approach with the hierarchical Pitman-Yor process can estimate more accurate translation probabilities based on all the data from various domains. Compared with the GIZA-batch, our approach achieves competitive performance with a much smaller phrase table. The number of phase pairs generated by our method is only 73.9%, 52.7%, and 45.4% of the GIZA-batch’s respectively. In the IWLST2012 data set, there is a huge difference gap between the HIT corpus and the BTEC corpus, and our method gains 0.814 BLEU improvement. While the FBIS data set is artificially divided and no clear human assigned differences among subdomains, our method loses 0.09 BLEU. In the framework we proposed, phrase pairs are extracted from each domain completely independent of each other, so those tasks can be executed on different machines, at different times, and of course in parallel when we assume that the domains are not incrementally added in the training data. The runtime of our approach and the batch-based ITGs sampling method in the FBIS data set is listed in Table 3 measured on a 2.7 GHz E5-2680 CPU and 128 Gigabyte memory. When comparing the hier-combin with the pialign-batch, the BLEU scores are a little higher while the time spent for training is much lower, almost one quarter of the pialign-batch. Even the performance of the pialign-linear is better than the Baseline GIZA-linear’s, which means that phrase pair extraction with hierarchical phrasal ITGs and sampling is more suitable for domain adaptation tasks than the combination GIZA++ and a heuristic method. Generally, the hierarchical combination method exploits the nature of a hierarchical Pitman-Yor process and gains the advantage of its smoothing effect, and our approach can incrementally generate a succinct phrase table based on all the data from various domains with more accurate probabilities. Traditional SMT phrase pair extraction is batch-based, while our method has no obvious shortcomings in translation accuracy, not to mention efficiency. 
5.2.2 Effect of Integration Order Here, we evaluate whether our hierarchical combination is sensitive to the order of the domains when forming a hierarchical structure. Through Equation (3), in our experiments, we chained the domains in the order listed in Table 1, which is in almost chronological order. Table 4 shows the BLEU scores for the three data sets, in which the order of combining phrase tables from each domain is alternated in the ascending and descending of the similarity to the test data. The similarity between the data from each domain and the test data is calculated using the perplexity measure with 5gram language model. The model learned from the domain more similar to the test data is placed in the front so that it can largely influence the parameter computation with less backoff effects. There is a big difference between the two opposite order in IWSLT 2012 data set, in which more than one point of decline in BLEU score when taking the BTEC corpus as the first layer. Note that the perplexity of BTEC was 344.589 while that of HIT was 107.788. The result may indicate that our hierarchical phrase combination method is sensitive to the integration order when the training data is small and there exists large gap in the similarity. However, if most domains are similar (FBIS data set) or if there are enough parallel sentence pairs (NIST data set) in each domain, then the translation performances are almost similar even with the opposite integrating orders. IWSLT FBIS LDC Descending 20.154 30.491 31.268 Ascending 19.066 30.388 31.254 Difference 1.088 0.103 0.014 Table 4: BLEU scores for the hierarchical model with different integrating orders. Here Pialign was run without multi-samples. 6 Conclusion and Future Work In this paper, we present a novel hierarchical phrase table combination method for SMT, which can exploit more of the potential from all of data coming from various fields and generate a suc808 cinct phrase table with more accurate translation probabilities. The method assumes that a combined model is derived from a hierarchical PitmanYor process with each prior learned separately in each domain, and achieves BLEU scores competitive with traditional batch-based ones. Meanwhile, the framework has natural characteristics for parallel and incremental phrase pair extraction. The experiment results on three different data sets indicate the effectiveness of our approach. In future work, we will also introduce incremental learning for phase pair extraction inside a domain, which means using the current translation probabilities already obtained as the base measure of sampling parameters for the upcoming domain. Furthermore, we will investigate any tradeoffs between the accuracy of the probability estimation and the coverage of phrase pairs. Acknowledgments We would like to thank our colleagues in both HIT and NICT for insightful discussions, and three anonymous reviewers for many invaluable comments and suggestions to improve our paper. This work is supported by National Natural Science Foundation of China (61100093, 61173073, 61073130, 61272384), and the Key Project of the National High Technology Research and Development Program of China (2011AA01A207). References Phil Blunsom and Trevor Cohn. 2010. Inducing synchronous grammars with slice sampling. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 238–241, Los Angeles, California, June. Association for Computational Linguistics. 
Phil Blunsom, Trevor Cohn, and Miles Osborne. 2008. A discriminative latent variable model for statistical machine translation. In Proceedings of ACL, pages 200–208, Columbus, Ohio, June. Association for Computational Linguistics. Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 858–867, Prague, Czech Republic, June. Association for Computational Linguistics. Colin Cherry and George Foster. 2012. Batch tuning strategies for statistical machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 427–436, Montr´eal, Canada, June. Association for Computational Linguistics. Colin Cherry and Dekang Lin. 2007. Inversion transduction grammar for joint phrasal translation modeling. In Proceedings of SSST, NAACL-HLT 2007/AMTA Workshop on Syntax and Structure in Statistical Translation, pages 17–24. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 263– 270, Stroudsburg, PA, USA. Association for Computational Linguistics. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2, HLT ’11, pages 176–181, Stroudsburg, PA, USA. Association for Computational Linguistics. John DeNero and Dan Klein. 2008. The complexity of phrase alignment problems. In Proceedings of ACL-08: HLT, Short Papers, pages 25–28, Columbus, Ohio, June. Association for Computational Linguistics. George Foster and Roland Kuhn. 2007. Mixturemodel adaptation for smt. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 128–135. Jes´us Gonz´alez-Rubio, Daniel Ortiz-Martinez, and Francisco Casacuberta. 2011. Fast incremental active learning for statistical machine translation. AVANCES EN INTELIGENCIA ARTIFICIAL. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of HLT-NAACL, pages 45–54. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 177–180, Stroudsburg, PA, USA. Association for Computational Linguistics. Abby Levenberg and Miles Osborne. 2009. Streambased randomised language models for smt. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2Volume 2, pages 756–764. Association for Computational Linguistics. 809 Abby Levenberg, Chris Callison-Burch, and Miles Osborne. 2010. Stream-based translation models for statistical machine translation. 
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 394– 402, Stroudsburg, PA, USA. Association for Computational Linguistics. Abby Levenberg, Miles Osborne, and David Matthews. 2011. Multiple-stream language models for statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 177–186, Edinburgh, Scotland, July. Association for Computational Linguistics. Zhiyuan Liu, Yuzhou Zhang, Edward Y Chang, and Maosong Sun. 2011. Plda+: Parallel latent dirichlet allocation with data placement and pipeline processing. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1–18. Yajuan Lu, Jin Huang, and Qun Liu. 2007. Improving statistical machine translation performance by training data selection and optimization. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 343–350, Prague, Czech Republic, June. Association for Computational Linguistics. Graham Neubig, Taro Watanabe, Eiichiro Sumita, Shinsuke Mori, and Tatsuya Kawahara. 2011. An unsupervised model for joint phrase alignment and extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 632–641, Portland, Oregon, USA, June. Association for Computational Linguistics. Graham Neubig, Taro Watanabe, Shinsuke Mori, and Tatsuya Kawahara. 2012. Machine translation without words through substring alignment. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 165–174, Jeju Island, Korea, July. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19–51. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Comput. Linguist., 30(4):417–449, December. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics. Jim Pitman and Marc Yor. 1997. The two-parameter poisson-dirichlet distribution derived from a stable subordinator. The Annals of Probability, 25(2):855– 900. Holger Schwenk and Philipp Koehn. 2008. Large and diverse language models for statistical machine translation. In International Joint Conference on Natural Language Processing, pages 661–668. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In Proc. of ICSLP. Jinsong Su, Hua Wu, Haifeng Wang, Yidong Chen, Xiaodong Shi, Huailin Dong, and Qun Liu. 2012. Translation model adaptation for statistical machine translation with monolingual topic information. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 459–468. Yee Whye Teh. 2006. A hierarchical bayesian language model based on pitman-yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 985–992. Association for Computational Linguistics. 
Wei Wang, Klaus Macherey, Wolfgang Macherey, Franz Och, and Peng Xu. 2012. Improved domain adaptation for statistical machine translation. In Proceedings of the Conference of the Association for Machine translation, Americas. F. Wood and Y. W. Teh. 2009. A hierarchical nonparametric Bayesian approach to statistical language model domain adaptation. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 12. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational linguistics, 23(3):377–403. Jia Xu, Yonggang Deng, Yuqing Gao, and Hermann Ney. 2007. Domain dependent statistical machine translation. In Proceedings of the MT Summit XI. Hao Zhang, Chris Quirk, Robert C. Moore, and Daniel Gildea. 2008. Bayesian learning of noncompositional phrases with synchronous parsing. In Proceedings of ACL-08: HLT, pages 97–105, Columbus, Ohio, June. Association for Computational Linguistics. 810
2013
79
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 73–82, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Joint Event Extraction via Structured Prediction with Global Features Qi Li Heng Ji Liang Huang Departments of Computer Science and Linguistics The Graduate Center and Queens College City University of New York New York, NY 10016, USA {liqiearth, hengjicuny, liang.huang.sh}@gmail.com Abstract Traditional approaches to the task of ACE event extraction usually rely on sequential pipelines with multiple stages, which suffer from error propagation since event triggers and arguments are predicted in isolation by independent local classifiers. By contrast, we propose a joint framework based on structured prediction which extracts triggers and arguments together so that the local predictions can be mutually improved. In addition, we propose to incorporate global features which explicitly capture the dependencies of multiple triggers and arguments. Experimental results show that our joint approach with local features outperforms the pipelined baseline, and adding global features further improves the performance significantly. Our approach advances state-ofthe-art sentence-level event extraction, and even outperforms previous argument labeling methods which use external knowledge from other sentences and documents. 1 Introduction Event extraction is an important and challenging task in Information Extraction (IE), which aims to discover event triggers with specific types and their arguments. Most state-of-the-art approaches (Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011) use sequential pipelines as building blocks, which break down the whole task into separate subtasks, such as trigger identification/classification and argument identification/classification. As a common drawback of the staged architecture, errors in upstream component are often compounded and propagated to the downstream classifiers. The downstream components, however, cannot impact earlier decisions. For example, consider the following sentences with an ambiguous word “fired”: (1) In Baghdad, a cameraman died when an American tank fired on the Palestine Hotel. (2) He has fired his air defense chief. In sentence (1), “fired” is a trigger of type Attack. Because of the ambiguity, a local classifier may miss it or mislabel it as a trigger of End-Position. However, knowing that “tank” is very likely to be an Instrument argument of Attack events, the correct event subtype assignment of “fired” is obviously Attack. Likewise, in sentence (2), “air defense chief” is a job title, hence the argument classifier is likely to label it as an Entity argument for End-Position trigger. In addition, the local classifiers are incapable of capturing inter-dependencies among multiple event triggers and arguments. Consider sentence (1) again. Figure 1 depicts the corresponding event triggers and arguments. The dependency between “fired” and “died” cannot be captured by the local classifiers, which may fail to attach “cameraman” to “fired” as a Target argument. By using global features, we can propagate the Victim argument of the Die event to the Target argument of the Attack event. As another example, knowing that an Attack event usually only has one Attacker argument, we could penalize assignments in which one trigger has more than one Attacker. Such global features cannot be easily exploited by a local classifier. 
Therefore, we take a fresh look at this problem and formulate it, for the first time, as a structured learning problem. We propose a novel joint event extraction algorithm to predict the triggers and arguments simultaneously, and use the structured perceptron (Collins, 2002) to train the joint model. This way we can capture the dependencies between triggers and argument as well as explore 73 In Baghdad, a cameraman died when an American tank fired on the Palestine Hotel. Attack Die Instrument Place Victim Target Instrument Target Place Figure 1: Event mentions of example (1). There are two event mentions that share three arguments, namely the Die event mention triggered by “died”, and the Attack event mention triggered by “fired”. arbitrary global features over multiple local predictions. However, different from easier tasks such as part-of-speech tagging or noun phrase chunking where efficient dynamic programming decoding is feasible, here exact joint inference is intractable. Therefore we employ beam search in decoding, and train the model using the early-update perceptron variant tailored for beam search (Collins and Roark, 2004; Huang et al., 2012). We make the following contributions: 1. Different from traditional pipeline approach, we present a novel framework for sentencelevel event extraction, which predicts triggers and their arguments jointly (Section 3). 2. We develop a rich set of features for event extraction which yield promising performance even with the traditional pipeline (Section 3.4.1). In this paper we refer to them as local features. 3. We introduce various global features to exploit dependencies among multiple triggers and arguments (Section 3.4.2). Experiments show that our approach outperforms the pipelined approach with the same set of local features, and significantly advances the state-of-the-art with the addition of global features which brings a notable further improvement (Section 4). 2 Event Extraction Task In this paper we focus on the event extraction task defined in Automatic Content Extraction (ACE) evaluation.1 The task defines 8 event types and 33 subtypes such as Attack, End-Position etc. We introduce the terminology of the ACE event extraction that we used in this paper: 1http://projects.ldc.upenn.edu/ace/ • Event mention: an occurrence of an event with a particular type and subtype. • Event trigger: the word most clearly expresses the event mention. • Event argument: an entity mention, temporal expression or value (e.g. Job-Title) that serves as a participant or attribute with a specific role in an event mention. • Event mention: an instance that includes one event trigger and some arguments that appear within the same sentence. Given an English text document, an event extraction system should predict event triggers with specific subtypes and their arguments from each sentence. Figure 1 depicts the event triggers and their arguments of sentence (1) in Section 1. The outcome of the entire sentence can be considered a graph in which each argument role is represented as a typed edge from a trigger to its argument. In this work, we assume that argument candidates such as entities are part of the input to the event extraction, and can be from either gold standard or IE system output. 3 Joint Framework for Event Extraction Based on the hypothesis that facts are interdependent, we propose to use structured perceptron with inexact search to jointly extract triggers and arguments that co-occur in the same sentence. 
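Before turning to the training and decoding algorithms, it may help to make the predicted structure concrete. The following sketch (ours, not the authors' code) encodes a sentence's event structure as trigger assignments plus typed edges from triggers to argument candidates, mirroring the graph view described above; the class names, entity types, and the role assignments shown for sentence (1) are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentCandidate:
    span: tuple          # (start, end) token offsets
    entity_type: str     # e.g. "PER", "ORG", "GPE"

@dataclass
class EventStructure:
    tokens: list
    candidates: list                                 # argument candidates (part of the input)
    triggers: dict = field(default_factory=dict)     # token index -> event subtype
    arguments: dict = field(default_factory=dict)    # (trigger index, candidate index) -> role

# Sentence (1): ".. a cameraman died when an American tank fired on the Palestine Hotel."
y = EventStructure(
    tokens="In Baghdad , a cameraman died when an American tank fired on the Palestine Hotel .".split(),
    candidates=[ArgumentCandidate((1, 1), "GPE"),    # Baghdad
                ArgumentCandidate((4, 4), "PER"),    # cameraman
                ArgumentCandidate((9, 9), "VEH"),    # tank
                ArgumentCandidate((13, 14), "FAC")], # Palestine Hotel
)
y.triggers[5] = "Die"        # "died"
y.triggers[10] = "Attack"    # "fired"
y.arguments[(5, 1)] = "Victim"
y.arguments[(10, 1)] = "Target"
y.arguments[(10, 2)] = "Instrument"
y.arguments[(10, 3)] = "Place"
```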
In this section, we will describe the training and decoding algorithms for this model. 3.1 Structured perceptron with beam search Structured perceptron is an extension to the standard linear perceptron for structured prediction, which was proposed in (Collins, 2002). Given a sentence instance x ∈X, which in our case is a sentence with argument candidates, the structured perceptron involves the following decoding prob74 lem which finds the best configuration z ∈Y according to the current model w: z = argmax y′∈Y(x) w · f(x, y′) (1) where f(x, y′) represents the feature vector for instance x along with configuration y′. The perceptron learns the model w in an online fashion. Let D = {(x(j), y(j))}n j=1 be the set of training instances (with j indexing the current training instance). In each iteration, the algorithm finds the best configuration z for x under the current model (Eq. 1). If z is incorrect, the weights are updated as follows: w = w + f(x, y) −f(x, z) (2) The key step of the training and test is the decoding procedure, which aims to search for the best configuration under the current parameters. In simpler tasks such as part-of-speech tagging and noun phrase chunking, efficient dynamic programming algorithms can be employed to perform exact inference. Unfortunately, it is intractable to perform the exact search in our framework because: (1) by jointly modeling the trigger labeling and argument labeling, the search space becomes much more complex. (2) we propose to make use of arbitrary global features, which makes it infeasible to perform exact inference efficiently. To address this problem, we apply beam-search along with early-update strategy to perform inexact decoding. Collins and Roark (2004) proposed the early-update idea, and Huang et al. (2012) later proved its convergence and formalized a general framework which includes it as a special case. Figure 2 describes the skeleton of perceptron training algorithm with beam search. In each step of the beam search, if the prefix of oracle assignment y falls out from the beam, then the top result in the beam is returned for early update. One could also use the standard-update for inference, however, with highly inexact search the standardupdate generally does not work very well because of “invalid updates”, i.e., updates that do not fix a violation (Huang et al., 2012). In Section 4.5 we will show that the standard perceptron introduces many invalid updates especially with smaller beam sizes, also observed by Huang et al. (2012). To reduce overfitting, we used averaged parameters after training to decode test instances in our experiments. The resulting model is called averaged perceptron (Collins, 2002). Input: Training set D = {(x(j), y(j))}n i=1, maximum iteration number T Output: Model parameters w 1 Initialization: Set w = 0; 2 for t ←1...T do 3 foreach (x, y) ∈D do 4 z ←beamSearch (x, y, w) 5 if z ̸= y then 6 w ←w + f(x, y[1:|z|]) −f(x, z) Figure 2: Perceptron training with beamsearch (Huang et al., 2012). Here y[1:i] denotes the prefix of y that has length i, e.g., y[1:3] = (y1, y2, y3). 3.2 Label sets Here we introduce the label sets for trigger and argument in the model. We use L ∪{⊥} to denote the trigger label alphabet, where L represents the 33 event subtypes, and ⊥indicates that the token is not a trigger. Similarly, R ∪{⊥} denotes the argument label sets, where R is the set of possible argument roles, and ⊥means that the argument candidate is not an argument for the current trigger. 
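Before continuing with the decoding notation, here is a compact sketch of the training loop of Figure 2, written by us for illustration: beam_search stands in for the decoder of Section 3.3 (with early update it returns the top beam item at the point where the gold prefix falls out of the beam), features stands in for the feature maps of Section 3.4, and parameter averaging is omitted.

```python
from collections import defaultdict

def perceptron_train(data, beam_search, features, T=20):
    """Structured perceptron with beam search and early update (cf. Figure 2).

    data        : list of (x, y) pairs, y being the gold assignment sequence
    beam_search : callable (x, y, w) -> predicted, possibly partial, assignment z
    features    : callable (x, y) -> dict mapping feature names to values
    """
    w = defaultdict(float)
    for _ in range(T):
        for x, y in data:
            z = beam_search(x, y, w)
            if z != y:
                gold_prefix = y[:len(z)]          # y[1:|z|] in the paper's notation
                for f, v in features(x, gold_prefix).items():
                    w[f] += v
                for f, v in features(x, z).items():
                    w[f] -= v
    return w
```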
It is worth to note that the set R of each particular event subtype is subject to the entity type constraints defined in the official ACE annotation guideline2. For example, the Attacker argument for an Attack event can only be one of PER, ORG and GPE (Geo-political Entity). 3.3 Decoding Let x = ⟨(x1, x2, ..., xs), E⟩denote the sentence instance, where xi represents the i-th token in the sentence and E = {ek}m k=1 is the set of argument candidates. We use y = (t1, a1,1, . . . , a1,m, . . . , ts, as,1, . . . , as,m) to denote the corresponding gold standard structure, where ti represents the trigger assignment for the token xi, and ai,k represents the argument role label for the edge between xi and argument candidate ek. 2http://projects.ldc.upenn.edu/ace/docs/English-EventsGuidelines v5.4.3.pdf 75 y = (t1, a1,1, a1,2, t2, a2,1, a2,2, | {z } argum ents for x2 t3, a3,1, a3,2) g(1) g(2) h(2, 1) h(3, 2) Figure 3: Example notation with s = 3, m = 2. For simplicity, throughout this paper we use yg(i) and yh(i,k) to represent ti and ai,k, respectively. Figure 3 demonstrates the notation with s = 3 and m = 2. The variables for the toy sentence “Jobs founded Apple” are as follows: x = ⟨(Jobs, x2 z }| { founded, Apple), E z }| { {JobsPER, AppleORG}⟩ y = (⊥, ⊥, ⊥, Start Org | {z } t2 , Agent, Org | {z } args for founded , ⊥, ⊥, ⊥) Figure 4 describes the beam-search procedure with early-update for event extraction. During each step with token i, there are two sub-steps: • Trigger labeling We enumerate all possible trigger labels for the current token. The linear model defined in Eq. (1) is used to score each partial configuration. Then the K-best partial configurations are selected to the beam, assuming the beam size is K. • Argument labeling After the trigger labeling step, we traverse all configurations in the beam. Once a trigger label for xi is found in the beam, the decoder searches through the argument candidates E to label the edges between each argument candidate and the trigger. After labeling each argument candidate, we again score each partial assignment and select the K-best results to the beam. After the second step, the rank of different trigger assignments can be changed because of the argument edges. Likewise, the decision on later argument candidates may be affected by earlier argument assignments. The overall time complexity for decoding is O(K · s · m). 3.4 Features In this framework, we define two types of features, namely local features and global features. We first introduce the definition of local and global features in this paper, and then describe the implementation details later. Recall that in the linear model defined in Eq. (1), f(x, y) denotes the features extracted from the input instance x along Input: Instance x = ⟨(x1, x2, ..., xs), E⟩and the oracle output y if for training. K: Beam size. L ∪{⊥}: trigger label alphabet. R ∪{⊥}: argument label alphabet. Output: 1-best prediction z for x 1 Set beam B ←[ϵ] /*empty configuration*/ 2 for i ←1...s do 3 buf ←{z′ ◦l | z′ ∈B, l ∈L ∪{⊥}} B ←K-best(buf ) 4 if y[1:g(i)] ̸∈B then 5 return B[0] /*for early-update*/ 6 for ek ∈E do /*search for arguments*/ 7 buf ←∅ 8 for z′ ∈B do 9 buf ←buf ∪{z′ ◦⊥} 10 if z′ g(i) ̸= ⊥then /*xi is a trigger*/ 11 buf ←buf ∪{z′ ◦r | r ∈R} 12 B ←K-best(buf ) 13 if y[1:h(i,k)] ̸∈B then 14 return B[0] /*for early-update*/ 15 return B[0] Figure 4: Decoding algorithm for event extraction. z◦l means appending label l to the end of z. During test, lines 4-5 & 13-14 are omitted. with configuration y. 
In general, each feature instance f in f is a function f : X × Y →R, which maps x and y to a feature value. Local features are only related to predictions on individual trigger or argument. In the case of unigram tagging for trigger labeling, each local feature takes the form of f(x, i, yg(i)), where i denotes the index of the current token, and yg(i) is its trigger label. In practice, it is convenient to define the local feature function as an indicator function, for example: f1(x, i, yg(i)) = ( 1 if yg(i) = Attack and xi = “fire” 0 otherwise The global features, by contrast, involve longer range of the output structure. Formally, each global feature function takes the form of f(x, i, k, y), where i and k denote the indices of the current token and argument candidate in decoding, respectively. The following indicator function is a simple example of global features: f101(x, i, k, y) =      1 if yg(i) = Attack and y has only one “Attacker” 0 otherwise 76 Category Type Feature Description Trigger Lexical 1. unigrams/bigrams of the current and context words within the window of size 2 2. unigrams/bigrams of part-of-speech tags of the current and context words within the window of size 2 3. lemma and synonyms of the current token 4. base form of the current token extracted from Nomlex (Macleod et al., 1998) 5. Brown clusters that are learned from ACE English corpus (Brown et al., 1992; Miller et al., 2004; Sun et al., 2011). We used the clusters with prefixes of length 13, 16 and 20 for each token. Syntactic 6. dependent and governor words of the current token 7. dependency types associated the current token 8. whether the current token is a modifier of job title 9. whether the current token is a non-referential pronoun Entity Information 10. unigrams/bigrams normalized by entity types 11. dependency features normalized by entity types 12. nearest entity type and string in the sentence/clause Argument Basic 1. context words of the entity mention 2. trigger word and subtype 3. entity type, subtype and entity role if it is a geo-political entity mention 4. entity mention head, and head of any other name mention from co-reference chain 5. lexical distance between the argument candidate and the trigger 6. the relative position between the argument candidate and the trigger: {before, after, overlap, or separated by punctuation} 7. whether it is the nearest argument candidate with the same type 8. whether it is the only mention of the same entity type in the sentence Syntactic 9. dependency path between the argument candidate and the trigger 10. path from the argument candidate and the trigger in constituent parse tree 11. length of the path between the argument candidate and the trigger in dependency graph 12. common root node and its depth of the argument candidate and parse tree 13. whether the argument candidate and the trigger appear in the same clause Table 1: Local features. 3.4.1 Local features In general there are two kinds of local features: Trigger features The local feature function for trigger labeling can be factorized as f(x, i, yg(i)) = p(x, i) ◦q(yg(i)), where p(x, i) is a predicate about the input, which we call text feature, and q(yg(i)) is a predicate on the trigger label. In practice, we define two versions of q(yg(i)): q0(yg(i)) = yg(i) (event subtype) q1(yg(i)) = event type of yg(i) q1(yg(i)) is a backoff version of the standard unigram feature. Some text features for the same event type may share a certain distributional similarity regardless of the subtypes. 
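A small sketch of how such indicator features might be implemented (our illustration, not the paper's feature code): the dictionary-based representations of x and y are assumptions, and the subtype-to-type mapping is only an excerpt used for the backoff version q1.

```python
# Excerpt of the ACE subtype -> event type mapping, used for the backoff feature q1
EVENT_TYPE = {"Attack": "Conflict", "Die": "Life", "End-Position": "Personnel"}

def local_trigger_features(x, i, t_i):
    """Local trigger features f(x, i, y_g(i)) = p(x, i) conjoined with q(y_g(i))."""
    if t_i is None:
        return {}
    word = x["tokens"][i].lower()
    return {
        f"word={word}|subtype={t_i}": 1.0,                    # q0: full event subtype
        f"word={word}|type={EVENT_TYPE.get(t_i, t_i)}": 1.0,  # q1: backoff to event type
    }

def global_argument_features(x, i, k, y):
    """Global features f(x, i, k, y) that inspect the whole partial structure y."""
    feats = {}
    t_i = y["triggers"].get(i)
    if t_i is None:
        return feats
    roles = [r for (j, _), r in y["arguments"].items() if j == i]
    if t_i == "Attack" and roles.count("Attacker") == 1:      # cf. f101 in the text
        feats["Attack_has_single_Attacker"] = 1.0
    feats[f"subtype={t_i}|num_args={len(roles)}"] = 1.0       # feature type (7) in Table 2
    return feats
```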
For example, if the nearest entity mention is “Company”, the current token is likely to be Personnel no matter whether it is End-Postion or Start-Position. Argument features Similarly, the local feature function for argument labeling can be represented as f(x, i, k, yg(i), yh(i,k)) = p(x, i, k) ◦ q(yg(i), yh(i,k)), where yh(i,k) denotes the argument assignment for the edge between trigger word i and argument candidate ek. We define two versions of q(yg(i), yh(i,k)): q0(yg(i), yh(i,k)) =      yh(i,k) if yh(i,k) is Place, Time or None yg(i) ◦yh(i,k) otherwise q1(yg(i), yh(i,k)) = ( 1 if yh(i,k) ̸=None 0 otherwise It is notable that Place and Time arguments are applicable and behave similarly to all event subtypes. Therefore features for these arguments are not conjuncted with trigger labels. q1(yh(i,k)) can be considered as a backoff version of q0(yh(i,k)), which does not discriminate different argument roles but only focuses on argument identification. Table 1 summarizes the text features about the input for trigger and argument labeling. In our experiments, we used the Stanford parser (De Marneffe et al., 2006) to create dependency parses. 3.4.2 Global features Table 2 summarizes the 8 types of global features we developed in this work. They can be roughly divided into the following two categories: 77 Category Feature Description Trigger 1. bigram of trigger types occur in the same sentence or the same clause 2. binary feature indicating whether synonyms in the same sentence have the same trigger label 3. context and dependency paths between two triggers conjuncted with their types Argument 4. context and dependency features about two argument candidates which share the same role within the same event mention 5. features about one argument candidate which plays as arguments in two event mentions in the same sentence 6. features about two arguments of an event mention which are overlapping 7. the number of arguments with each role type of an event mention conjuncted with the event subtype 8. the pairs of time arguments within an event mention conjuncted with the event subtype Table 2: Global features. Transport (transport) E ntity (women) E ntity (children) Artifact Artifact conj and (a) E ntity (cameramen) D ie (died) A ttack (fired) Victim Target advcl (b) E nd-P osition (resigned) E ntity E ntity [co-chief executive of [V ivendiU niversalE ntertainm ent]] Position Entity O verlapping (c) Figure 5: Illustration of global features (4-6) in Table 2. Event Probability Attack 0.34 Die 0.14 Transport 0.08 Injure 0.04 Meet 0.02 Table 3: Top 5 event subtypes that co-occur with Attack event in the same sentence. Trigger global feature This type of feature captures the dependencies between two triggers within the same sentence. For instance: feature (1) captures the co-occurrence of trigger types. This kind of feature is motivated by the fact that two event mentions in the same sentence tend to be semantically coherent. As an example, from Table 3 we can see that Attack event often co-occur with Die event in the same sentence, but rarely co-occur with Start-Position event. Feature (2) encourages synonyms or identical tokens to have the same label. Feature (3) exploits the lexical and syntactic relation between two triggers. A simple example is whether an Attack trigger and a Die trigger are linked by the dependency relation conj and. Argument global feature This type of feature is defined over multiple arguments for the same or different triggers. 
Consider the following sentence: (3) Trains running to southern Sudan were used to transport abducted women and children. The Transport event mention “transport” has two Artifact arguments, “women” and “children”. The dependency edge conj and between “women” and “children” indicates that they should play the same role in the event mention. The triangle structure in Figure 5(a) is an example of feature (4) for the above example. This feature encourages entities that are linked by dependency relation conj and to play the same role Artifact in any Transport event. Similarly, Figure 5(b) depicts an example of feature (5) for sentence (1) in Section 1. In this example, an entity mention is Victim argument to Die event and Target argument to Attack event, and the two event triggers are connected by the typed dependency advcl. Here advcl means that the word “fired” is an adverbial clause modier of “died”. Figure 5(c) shows an example of feature (6) for the following sentence: (4) Barry Diller resigned as co-chief executive of Vivendi Universal Entertainment. The job title “co-chief executive of Vivendi Universal Entertainment” overlaps with the Organization mention “Vivendi Universal Entertainment”. The feature in the triangle shape can be considered as a soft constraint such that if a JobTitle mention is a Position argument to an EndPosition trigger, then the Organization mention 78 which appears at the end of it should be labeled as Entity argument for the same trigger. Feature (7-8) are based on the statistics about different arguments for the same trigger. For instance, in many cases, a trigger can only have one Place argument. If a partial configuration mistakenly classifies more than one entity mention as Place arguments for the same trigger, then it will be penalized. 4 Experiments 4.1 Data set and evaluation metric We utilized the ACE 2005 corpus as our testbed. For comparison, we used the same test set with 40 newswire articles (672 sentences) as in (Ji and Grishman, 2008; Liao and Grishman, 2010) for the experiments, and randomly selected 30 other documents (863 sentences) from different genres as the development set. The rest 529 documents (14, 840 sentences) are used for training. Following previous work (Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011), we use the following criteria to determine the correctness of an predicted event mention: • A trigger is correct if its event subtype and offsets match those of a reference trigger. • An argument is correctly identified if its event subtype and offsets match those of any of the reference argument mentions. • An argument is correctly identified and classified if its event subtype, offsets and argument role match those of any of the reference argument mentions. Finally we use Precision (P), Recall (R) and Fmeasure (F1) to evaluate the overall performance. 4.2 Baseline system Chen and Ng (2012) have proven that performing identification and classification in one step is better than two steps. To compare our proposed method with the previous pipelined approaches, we implemented two Maximum Entropy (MaxEnt) classifiers for trigger labeling and argument labeling respectively. To make a fair comparison, the feature sets in the baseline are identical to the local text features we developed in our framework (see Figure 1). 4.3 Training curves We use the harmonic mean of the trigger’s F1 measure and argument’s F1 measure to measure the performance on the development set. 
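A minimal sketch of this scoring scheme (ours), assuming predicted and gold triggers and arguments are represented as sets of comparable tuples, e.g. (offsets, subtype) for triggers and (offsets, subtype, role) for classified arguments:

```python
def prf(predicted, gold):
    """Precision, recall, and F1 over sets of comparable items."""
    correct = len(predicted & gold)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def dev_score(pred_triggers, gold_triggers, pred_args, gold_args):
    """Harmonic mean of trigger F1 and argument F1, used for model selection on dev."""
    _, _, f_trig = prf(pred_triggers, gold_triggers)
    _, _, f_arg = prf(pred_args, gold_args)
    return 2 * f_trig * f_arg / (f_trig + f_arg) if f_trig + f_arg else 0.0
```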
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 # of training iteration 0.44 0.46 0.48 0.50 0.52 0.54 0.56 0.58 0.60 Harmonic mean local+global local Figure 6: Training curves on dev set. Figure 6 shows the training curves of the averaged perceptron with respect to the performance on the development set when the beam size is 4. As we can see both curves converge around iteration 20 and the global features improve the overall performance, compared to its counterpart with only local features. Therefore we set the number of iterations as 20 in the remaining experiments. 4.4 Impact of beam size The beam size is an important hyper parameter in both training and test. Larger beam size will increase the computational cost while smaller beam size may reduce the performance. Table 4 shows the performance on the development set with several different beam sizes. When beam size = 4, the algorithm achieved the highest performance on the development set with trigger F1 = 67.9, argument F1 = 51.5, and harmonic mean = 58.6. When the size is increased to 32, the accuracy was not improved. Based on this observation, we chose beam size as 4 for the remaining experiments. 4.5 Early-update vs. standard-update Huang et al. (2012) define “invalid update” to be an update that does not fix a violation (and instead reinforces the error), and show that it strongly (anti-)correlates with search quality and learning quality. Figure 7 depicts the percentage of invalid updates in standard-update with and without global features, respectively. With global features, there are numerous invalid updates when the 79 Beam size 1 2 4 8 16 32 Training time (sec) 993 2,034 3,982 8,036 15,878 33,026 Harmonic mean 57.6 57.7 58.6 58.0 57.8 57.8 Table 4: Comparison of training time and accuracy on the dev set. 1 2 4 8 16 32 beam size 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 % of invalid updates local+global local Figure 7: Percentage of the so-called “invalid updates” (Huang et al., 2012) in standard perceptron. Strategy F1 on Dev F1 on Test Trigger Arg Trigger Arg Standard (b = 1) 68.3 47.4 64.4 49.8 Early (b = 1) 68.9 49.5 65.2 52.1 Standard (b = 4) 68.4 50.5 67.1 51.4 Early (b = 4) 67.9 51.5 67.5 52.7 Table 5: Comparison between the performance (%) of standard-update and early-update with global features. Here b stands for beam size. beam size is small. The ratio decreases monotonically as beam size increases. The model with only local features made much smaller numbers of invalid updates, which suggests that the use of global features makes the search problem much harder. This observation justify the application of early-update in this work. To further investigate the difference between early-update and standardupdate, we tested the performance of both strategies, which is summarized in Table 5. As we can see the performance of standard-update is generally worse than early-update. When the beam size is increased (b = 4), the gap becomes smaller as the ratio of invalid updates is reduced. 4.6 Overall performance Table 6 shows the overall performance on the blind test set. In addition to our baseline, we compare against the sentence-level system reported in Hong et al. (2011), which, to the best of our knowledge, is the best-reported system in the literature based on gold standard argument candidates. The proposed joint framework with local features achieves comparable performance for triggers and outperforms the staged baseline especially on arguments. 
By adding global features, the overall performance is further improved significantly. Compared to the staged baseline, it gains 1.6% improvement on trigger’s F-measure and 8.8% improvement on argument’s F-measure. Remarkably, compared to the cross-entity approach reported in (Hong et al., 2011), which attained 68.3% F1 for triggers and 48.3% for arguments, our approach with global features achieves even better performance on argument labeling although we only used sentencelevel information. We also tested the performance with argument candidates automatically extracted by a highperforming name tagger (Li et al., 2012b) and an IE system (Grishman et al., 2005). The results are summarized in Table 7. The joint approach with global features significantly outperforms the baseline and the model with only local features. We also show that it outperforms the sentencelevel baseline reported in (Ji and Grishman, 2008; Liao and Grishman, 2010), both of which attained 59.7% F1 for triggers and 36.6% for arguments. Our approach aims to tackle the problem of sentence-level event extraction, thereby only used intra-sentential evidence. Nevertheless, the performance of our approach is still comparable with the best-reported methods based on cross-document and cross-event inference (Ji and Grishman, 2008; Liao and Grishman, 2010). 5 Related Work Most recent studies about ACE event extraction rely on staged pipeline which consists of separate local classifiers for trigger labeling and argument labeling (Grishman et al., 2005; Ahn, 2006; Ji and Grishman, 2008; Chen and Ji, 2009; Liao and Grishman, 2010; Hong et al., 2011; Li et al., 2012a; Chen and Ng, 2012). To the best of our knowledge, our work is the first attempt to jointly model these two ACE event subtasks. 80 Methods Trigger Identification (%) Trigger Identification + classification (%) Argument Identification (%) Argument Role (%) P R F1 P R F1 P R F1 P R F1 Sentence-level in Hong et al. (2011) N/A 67.6 53.5 59.7 46.5 37.15 41.3 41.0 32.8 36.5 Staged MaxEnt classifiers 76.2 60.5 67.4 74.5 59.1 65.9 74.1 37.4 49.7 65.4 33.1 43.9 Joint w/ local features 77.4 62.3 69.0 73.7 59.3 65.7 69.7 39.6 50.5 64.1 36.5 46.5 Joint w/ local + global features 76.9 65.0 70.4 73.7 62.3 67.5 69.8 47.9 56.8 64.7 44.4 52.7 Cross-entity in Hong et al. (2011)† N/A 72.9 64.3 68.3 53.4 52.9 53.1 51.6 45.5 48.3 Table 6: Overall performance with gold-standard entities, timex, and values. †beyond sentence level. Methods Trigger F1 Arg F1 Ji and Grishman (2008) cross-doc Inference 67.3 42.6 Ji and Grishman (2008) sentence-level 59.7 36.6 MaxEnt classifiers 64.7 (↓1.2) 33.7 (↓10.2) Joint w/ local 63.7 (↓2.0) 35.8 (↓10.7) Joint w/ local + global 65.6 (↓1.9) 41.8 (↓10.9) Table 7: Overall performance (%) with predicted entities, timex, and values. ↓indicates the performance drop from experiments with gold-standard argument candidates (see Table 6). For the Message Understanding Conference (MUC) and FAS Program for Monitoring Emerging Diseases (ProMED) event extraction tasks, Patwardhan and Riloff (2009) proposed a probabilistic framework to extract event role fillers conditioned on the sentential event occurrence. Besides having different task definitions, the key difference from our approach is that their role filler recognizer and sentential event recognizer are trained independently but combined in the test stage. Our experiments, however, have demonstrated that it is more advantageous to do both training and testing with joint inference. 
There has been some previous work on joint modeling for biomedical events (Riedel and McCallum, 2011a; Riedel et al., 2009; McClosky et al., 2011; Riedel and McCallum, 2011b). (McClosky et al., 2011) is most closely related to our approach. They casted the problem of biomedical event extraction as a dependency parsing problem. The key assumption that event structure can be considered as trees is incompatible with ACE event extraction. In addition, they used a separate classifier to predict the event triggers before applying the parser, while we extract the triggers and argument jointly. Finally, the features in the parser are edge-factorized. To exploit global features, they applied a MaxEnt-based global re-ranker. In comparison, our approach is a unified framework based on beam search, which allows us to exploit arbitrary global features efficiently. 6 Conclusions and Future Work We presented a joint framework for ACE event extraction based on structured perceptron with inexact search. As opposed to traditional pipelined approaches, we re-defined the task as a structured prediction problem. The experiments proved that the perceptron with local features outperforms the staged baseline and the global features further improve the performance significantly, surpassing the current state-of-the-art by a large margin. As shown in Table 7, the overall performance drops substantially when using predicted argument candidates. To improve the accuracy of endto-end IE system, we plan to develop a complete joint framework to recognize entities together with event mentions for future work. Also we are interested in applying this framework to other IE tasks such as relation extraction. Acknowledgments This work was supported by the U.S. Army Research Laboratory under Cooperative Agreement No. W911NF-09-2-0053 (NS-CTA), U.S. NSF CAREER Award under Grant IIS-0953149, U.S. NSF EAGER Award under Grant No. IIS1144111, U.S. DARPA Award No. FA8750-13-20041 in the “Deep Exploration and Filtering of Text” (DEFT) Program, a CUNY Junior Faculty Award, and Queens College equipment funds. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. 81 References David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events, pages 1–8. Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467–479. Zheng Chen and Heng Ji. 2009. Language specific issue and feature exploration in chinese event extraction. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 209–212. Chen Chen and Vincent Ng. 2012. Joint modeling for chinese event extraction with rich linguistic features. In COLING, pages 529–544. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 111. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. 
In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1–8. Marie-Catherine De Marneffe, Bill MacCartney, and Christopher D Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC, volume 6, pages 449–454. Ralph Grishman, David Westbrook, and Adam Meyers. 2005. Nyu’s english ace 2005 system description. In Proceedings of ACE 2005 Evaluation Workshop. Washington. Yu Hong, Jianfeng Zhang, Bin Ma, Jian-Min Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In Proceedings of ACL, pages 1127–1136. Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–151. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of ACL, pages 254–262. Peifeng Li, Guodong Zhou, Qiaoming Zhu, and Libin Hou. 2012a. Employing compositional semantics and discourse consistency in chinese event extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1006–1016. Qi Li, Haibo Li, Heng Ji, Wen Wang, Jing Zheng, and Fei Huang. 2012b. Joint bilingual name tagging for parallel corpora. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 1727–1731. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of ACL, pages 789–797. Catherine Macleod, Ralph Grishman, Adam Meyers, Leslie Barrett, and Ruth Reeves. 1998. Nomlex: A lexicon of nominalizations. In Proceedings of EURALEX, volume 98, pages 187–193. David McClosky, Mihai Surdeanu, and Christopher D. Manning. 2011. Event extraction as dependency parsing. In Proceedings of ACL, pages 1626–1635. Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name tagging with word clusters and discriminative training. In Proceedings of HLT-NAACL, volume 4, pages 337–342. Siddharth Patwardhan and Ellen Riloff. 2009. A unified model of phrasal and sentential evidence for information extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 151– 160. Sebastian Riedel and Andrew McCallum. 2011a. Fast and robust joint models for biomedical event extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1– 12. Sebastian Riedel and Andrew McCallum. 2011b. Robust biomedical event extraction with dual decomposition and minimal domain adaptation. In Proceedings of the BioNLP Shared Task 2011 Workshop, pages 46–50. Sebastian Riedel, Hong-Woo Chun, Toshihisa Takagi, and Jun’ichi Tsujii. 2009. A markov logic approach to bio-molecular event extraction. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task, pages 41–49. Ang Sun, Ralph Grishman, and Satoshi Sekine. 2011. Semi-supervised relation extraction with large-scale word clustering. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 521–529. 82
2013
8
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 811–821, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Shallow Local Multi Bottom-up Tree Transducers in Statistical Machine Translation Fabienne Braune and Nina Seemann and Daniel Quernheim and Andreas Maletti Institute for Natural Language Processing, University of Stuttgart Pfaffenwaldring 5b, 70569 Stuttgart, Germany {braunefe,seemanna,daniel,maletti}@ims.uni-stuttgart.de Abstract We present a new translation model integrating the shallow local multi bottomup tree transducer. We perform a largescale empirical evaluation of our obtained system, which demonstrates that we significantly beat a realistic tree-to-tree baseline on the WMT 2009 English →German translation task. As an additional contribution we make the developed software and complete tool-chain publicly available for further experimentation. 1 Introduction Besides phrase-based machine translation systems (Koehn et al., 2003), syntax-based systems have become widely used because of their ability to handle non-local reordering. Those systems use synchronous context-free grammars (Chiang, 2007), synchronous tree substitution grammars (Eisner, 2003) or even more powerful formalisms like synchronous tree-sequence substitution grammars (Sun et al., 2009). However, those systems use linguistic syntactic annotation at different levels. For example, the systems proposed by Wu (1997) and Chiang (2007) use no linguistic information and are syntactic in a structural sense only. Huang et al. (2006) and Liu et al. (2006) use syntactic annotations on the source language side and show significant improvements in translation quality. Using syntax exclusively on the target language side has also been successfully tried by Galley et al. (2004) and Galley et al. (2006). Nowadays, open-source toolkits such as Moses (Koehn et al., 2007) offer syntax-based components (Hoang et al., 2009), which allow experiments without expert knowledge. The improvements observed for systems using syntactic annotation on either the source or the target language side naturally led to experiments with models that use syntactic annotations on both sides. However, as noted by Lavie et al. (2008), Liu et al. (2009), and Chiang (2010), the integration of syntactic information on both sides tends to decrease translation quality because the systems become too restrictive. Several strategies such as (i) using parse forests instead of single parses (Liu et al., 2009) or (ii) soft syntactic constraints (Chiang, 2010) have been developed to alleviate this problem. Another successful approach has been to switch to more powerful formalisms, which allow the extraction of more general rules. A particularly powerful model is the non-contiguous version of synchronous tree-sequence substitution grammars (STSSG) of Zhang et al. (2008a), Zhang et al. (2008b), and Sun et al. (2009), which allows sequences of trees on both sides of the rules [see also (Raoult, 1997)]. The multi bottom-up tree transducer (MBOT) of Arnold and Dauchet (1982) and Lilin (1978) offers a middle ground between traditional syntax-based models and STSSG. Roughly speaking, an MBOT is an STSSG, in which all the discontinuities must occur on the target language side (Maletti, 2011). This restriction yields many algorithmic advantages over both the traditional models as well as STSSG as demonstrated by Maletti (2010). 
Formally, they are expressive enough to express all sensible translations (Maletti, 2012)1. Figure 2 displays sample rules of the MBOT variant, called ℓMBOT, that we use (in a graphical representation of the trees and the alignment). In this contribution, we report on our novel statistical machine translation system that uses an ℓMBOT-based translation model. The theoretical foundations of ℓMBOT and their integration into our translation model are presented in Sections 2 and 3. In order to empirically evaluate the ℓMBOT model, we implemented a machine trans1A translation is sensible if it is of linear size increase and can be computed by some (potentially copying) top-down tree transducer. 811 Sε NP1 JJ11 Official111 NNS12 forecasts121 VP2 VBD21 predicted211 NP22 QP221 RB2211 just22111 CD2212 322121 NN222 %2221 Figure 1: Example tree t with indicated positions. We have t(21) = VBD and t|221 is the subtree marked in red. lation system that we are going to make available to the public. We implemented ℓMBOT inside the syntax-based component of the Moses open source toolkit. Section 4 presents the most important algorithms of our ℓMBOT decoder. We evaluate our new system on the WMT 2009 shared translation task English →German. The translation quality is automatically measured using BLEU scores, and we confirm the findings by providing linguistic evidence (see Section 5). Note that in contrast to several previous approaches, we perform large scale experiments by training systems with approx. 1.5 million parallel sentences. 2 Theoretical Model In this section, we present the theoretical generative model used in our approach to syntax-based machine translation. Essentially, it is the local multi bottom-up tree transducer of Maletti (2011) with the restriction that all rules must be shallow, which means that the left-hand side of each rule has height at most 2 (see Figure 2 for shallow rules and Figure 4 for rules including non-shallow rules). The rules extracted from the training example of Figure 3 are displayed in Figure 4. Those extracted rules are forcibly made shallow by removing internal nodes. The application of those rules is illustrated in Figures 5 and 6. For those that want to understand the inner workings, we recall the principal model in full detail in the rest of this section. Since we utilize syntactic parse trees, let us introduce trees first. Given an alphabet Σ of labels, the set TΣ of all Σ-trees is the smallest set T such that σ(t1, . . . , tk) ∈T for all σ ∈Σ, integer k ≥0, and t1, . . . , tk ∈T. Intuitively, a tree t consists of a labeled root node σ followed by a sequence t1, . . . , tk of its children. A tree t ∈TΣ is shallow if t = σ(t1, . . . , tk) with σ ∈Σ and t1, . . . , tk ∈Σ. NP QP NN →  PP von AP NN  S NP VBD NP →  S NP VAFIN PP VVPP  Figure 2: Sample ℓMBOT rules. To address a node inside a tree, we use its position, which is a word consisting of positive integers. Roughly speaking, the root of a tree is addressed with the position ε (the empty word). The position iw with i ∈N addresses the position w in the ith direct child of the root. In this way, each node in the tree is assigned a unique position. We illustrate this notion in Figure 1. Formally, the positions pos(t) ⊆N∗of a tree t = σ(t1, . . . , tk) are inductively defined by pos(t) = {ε} ∪pos(k)(t1, . . . , tk), where pos(k)(t1, . . . , tk) = [ 1≤i≤k {iw | w ∈pos(ti)} . Let t ∈TΣ and w ∈pos(t). The label of t at position w is t(w), and the subtree rooted at position w is t|w. 
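As a concrete companion to these definitions, here is a small sketch (ours, not part of the released tool-chain) of Sigma-trees with 1-based positions, subtree access t|w, and the shallowness test; the example tree is the left-hand side of the lower rule in Figure 2.

```python
class Tree:
    """A Sigma-tree: a labeled root followed by a (possibly empty) list of children."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def positions(self):
        """pos(t): () addresses the root; (i,) + w addresses position w in child i (1-based)."""
        yield ()
        for i, child in enumerate(self.children, start=1):
            for w in child.positions():
                yield (i,) + w

    def at(self, w):
        """The subtree t|w rooted at position w; its label is t(w)."""
        return self if not w else self.children[w[0] - 1].at(w[1:])

    def is_shallow(self):
        """Height at most 2: every child of the root is a leaf."""
        return all(not c.children for c in self.children)

# Left-hand side of the lower rule in Figure 2, S(NP, VBD, NP):
lhs = Tree("S", [Tree("NP"), Tree("VBD"), Tree("NP")])
assert lhs.at((2,)).label == "VBD" and lhs.is_shallow()
```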
These notions are also illustrated in Figure 1. A position w ∈pos(t) is a leaf (in t) if w1 /∈pos(t). In other words, leaves do not have any children. Given a subset N ⊆Σ, we let leafN(t) = {w ∈pos(t) | t(w) ∈N, w leaf in t} be the set of all leaves labeled by elements of N. When N is the set of nonterminals, we call them leaf nonterminals. We extend this notion to sequences t1, . . . , tk ∈TΣ by leaf(k) N (t1, . . . , tk) = [ 1≤i≤k {iw | w ∈leafN(ti)}. Let w1, . . . , wn ∈pos(t) be (pairwise prefixincomparable) positions and t1, . . . , tn ∈TΣ. Then t[wi ←ti]1≤i≤n denotes the tree that is obtained from t by replacing (in parallel) the subtrees at wi by ti for every 1 ≤i ≤n. Now we are ready to introduce our model, which is a minor variation of the local multi bottom-up tree transducer of Maletti (2011). Let Σ and ∆be the input and output symbols, respectively, and let N ⊆Σ ∪∆be the set of nonterminal symbols. Essentially, the model works on pairs ⟨t, (u1, . . . , uk)⟩consisting of an input tree t ∈TΣ 812 and a sequence u1, . . . , uk ∈T∆of output trees. Such pairs are pre-translations of rank k. The pretranslation ⟨t, (u1, . . . , uk)⟩is shallow if all trees t, u1, . . . , uk in it are shallow. Together with a pre-translation we typically have to store an alignment. Given a pre-translation ⟨t, (u1, . . . , uk)⟩of rank k and 1 ≤i ≤k, we call ui the ith translation of t. An alignment for this pre-translation is an injective mapping ψ: leaf(k) N (u1, . . . , uk) →leafN(t)×N such that if (w, j) ∈ran(ψ), then also (w, i) ∈ran(ψ) for all 1 ≤j ≤i.2 In other words, if an alignment requests the ith translation, then it should also request all previous translations. Definition 1 A shallow local multi bottom-up tree transducer (ℓMBOT) is a finite set R of rules together with a mapping c: R →R such that every rule, written t →ψ (u1, . . . , uk), contains a shallow pre-translation ⟨t, (u1, . . . , uk)⟩and an alignment ψ for it. The components t, (u1, . . . , uk), ψ, and c(ρ) are called the left-hand side, the right-hand side, the alignment, and the weight of the rule ρ = t →ψ (u1, . . . , uk). Figure 2 shows two example ℓMBOT rules (without weights). Overall, the rules of an ℓMBOT are similar to the rules of an SCFG (synchronous context-free grammar), but our right-hand sides contain a sequence of trees instead of just a single tree. In addition, the alignments in an SCFG rule are bijective between leaf nonterminals, whereas our model permits multiple alignments to a single leaf nonterminal in the left-hand side (see Figure 2). Our ℓMBOT rules are obtained automatically from data like that in Figure 3. Thus, we (word) align the bilingual text and parse it in both the source and the target language. In this manner we obtain sentence pairs like the one shown in Figure 3. To these sentence pairs we apply the rule extraction method of Maletti (2011). The rules extracted from the sentence pair of Figure 3 are shown in Figure 4. Note that these rules are not necessarily shallow (the last two rules are not). Thus, we post-process the extracted rules and make them shallow. The shallow rules corresponding to the non-shallow rules of Figure 4 are shown in Figure 2. Next, we define how to combine rules to form derivations. In contrast to most other models, we 2ran(f) for a mapping f : A →B denotes the range of f, which is {f(a) | a ∈A}. 
S NP JJ Official NNS forecasts VP VBD predicted NP QP RB just CD 3 NN % S NP ADJA Offizielle NN Prognosen VAFIN sind VP PP APPR von AP ADV nur CARD 3 NN % VVPP ausgegangen Figure 3: Aligned parsed sentences. only introduce a derivation semantics that does not collapse multiple derivations for the same input-output pair.3 We need one final notion. Let ρ = t →ψ (u1, . . . , uk) be a rule and w ∈leafN(t) be a leaf nonterminal (occurrence) in the left-hand side. The w-rank rk(ρ, w) of the rule ρ is rk(ρ, w) = max {i ∈N | (w, i) ∈ran(ψ)} . For example, for the lower rule ρ in Figure 2 we have rk(ρ, 1) = 1, rk(ρ, 2) = 2, and rk(ρ, 3) = 1. Definition 2 The set τ(R, c) of weighted pretranslations of an ℓMBOT (R, c) is the smallest set T subject to the following restriction: If there exist • a rule ρ = t →ψ (u1, . . . , uk) ∈R, • a weighted pre-translation ⟨tw, cw, (uw 1 , . . . , uw kw)⟩∈T for every w ∈leafN(t) with – rk(ρ, w) = kw,4 – t(w) = tw(ε),5 and – for every iw′ ∈leaf(k) N (u1, . . . , uk),6 ui(w′) = uv j(ε) with ψ(iw′) = (v, j), then ⟨t′, c′, (u′ 1, . . . , u′ k)⟩∈T is a weighted pretranslation, where • t′ = t[w ←tw | w ∈leafN(t)], 3A standard semantics is presented, for example, in (Maletti, 2011). 4If w has n alignments, then the pre-translation selected for it has to have suitably many output trees. 5The labels have to coincide for the input tree. 6Also the labels for the output trees have to coincide. 813 JJ Official →  ADJA Offizielle  NNS forecasts →  NN Prognosen  VBD predicted →  VAFIN sind , VVPP ausgegangen  RB just →  ADV nur  CD 3 →  CARD 3  NN % →  NN %  NP JJ NNS →  NP ADJA NN  QP RB CD →  AP ADV CARD  NP QP NN →  PP APPR von AP NN  S NP VP VBD NP →  S NP VAFIN VP PP VVPP  Figure 4: Extracted (even non-shallow) rules. We obtain our rules by making those rules shallow. • c′ = c(ρ) · Q w∈leafN(t) cw, and • u′ i = ui[iw′ ←uv j | ψ(iw′) = (v, j)] for every 1 ≤i ≤k. Rules that do not contain any nonterminal leaves are automatically weighted pre-translations with their associated rule weight. Otherwise, each nonterminal leaf w in the left-hand side of a rule ρ must be replaced by the input tree tw of a pretranslation ⟨tw, cw, (uw 1 , . . . , uw kw)⟩, whose root is labeled by the same nonterminal. In addition, the rank rk(ρ, w) of the replaced nonterminal should match the number kw of components in the selected weighted pre-translation. Finally, the nonterminals in the right-hand side that are aligned to w should be replaced by the translation that the alignment requests, provided that the nonterminal matches with the root symbol of the requested translation. The weight of the new pre-translation is obtained simply by multiplying the rule weight and the weights of the selected weighted pretranslations. The overall process is illustrated in Figures 5 and 6. 3 Translation Model Given a source language sentence e, our translation model aims to find the best corresponding target language translation ˆg;7 i.e., ˆg = arg maxg p(g|e) . We estimate the probability p(g|e) through a loglinear combination of component models with parameters λm scored on the pre-translations ⟨t, (u)⟩ such that the leaves of t concatenated read e.8 p(g|e) ∝ 7 Y m=1 hm ⟨t, (u)⟩ λm Our model uses the following features hm(⟨t, (u1, . . . , uk)⟩) for a general pre-translation τ = ⟨t, (u1, . . . , uk)⟩: 7Our main translation direction is English to German. 8Actually, t must embed in the parse tree of e; see Section 4. 
(1) The forward translation weight using the rule weights as described in Section 2 (2) The indirect translation weight using the rule weights as described in Section 2 (3) Lexical translation weight source →target (4) Lexical translation weight target →source (5) Target side language model (6) Number of words in the target sentences (7) Number of rules used in the pre-translation (8) Number of target side sequences; here k times the number of sequences used in the pretranslations that constructed τ (gap penalty) The rule weights required for (1) are relative frequencies normalized over all rules with the same left-hand side. In the same fashion the rule weights required for (2) are relative frequencies normalized over all rules with the same righthand side. Additionally, rules that were extracted at most 10 times are discounted by multiplying the rule weight by 10−2. The lexical weights for (2) and (3) are obtained by multiplying the word translations w(gi|ej) [respectively, w(ej|gi)] of lexically aligned words (gi, ej) accross (possibly discontiguous) target side sequences.9 Whenever a source word ej is aligned to multiple target words, we average over the word translations.10 h3(⟨t, (u1, . . . , uk)⟩) = Y lexical item e occurs in t average {w(g|e) | g aligned to e} The computation of the language model estimates for (6) is adapted to score partial translations consisting of discontiguous units. We explain the details in Section 4. Finally, the count c of target sequences obtained in (7) is actually used as a score 1001−c. This discourages rules with many target sequences. 9The lexical alignments are different from the alignments used with a pre-translation. 10If the word ej has no alignment to a target word, then it is assumed to be aligned to a special NULL word and this alignment is scored. 814 Combining a rule with pre-translations: NP JJ NNS →  NP ADJA NN  JJ Official →  ADJA Offizielle  NNS forecasts →  NN Prognosen  Obtained new pre-translation: NP JJ Official NNS forecasts →  NP ADJA Offizielle NN Prognosen  Figure 5: Simple rule application. Combining a rule with pre-translations: S NP VBD NP →  S NP VAFIN PP VVPP  NP JJ Official NNS forecasts →  NP ADJA Offizielle NN Prognosen  VBD predicted →  VAFIN sind , VVPP ausgegangen  NP QP RB just CD 3 NN % →  PP von AP ADV nur CARD 3 NN %  Obtained new pre-translation: S NP JJ Official NNS forecasts VBD predicted NP QP RB just CD 3 NN % → S NP ADJA Offizielle NN Prognosen VAFIN sind PP von AP ADV nur CARD 3 NN % VVPP ausgegangen ! Figure 6: Complex rule application. S NP VAFIN PP VVPP Offizielle Prognosen  sind , ausgegangen  von nur 3 % Figure 7: Illustration of LM scoring. 815 4 Decoding We implemented our model in the syntax-based component of the Moses open-source toolkit by Koehn et al. (2007) and Hoang et al. (2009). The standard Moses syntax-based decoder only handles SCFG rules; i.e, rules with contiguous components on the source and the target language side. Roughly speaking, SCFG rules are ℓMBOT rules with exactly one output tree. We thus had to extend the system to support our ℓMBOT rules, in which arbitrarily many output trees are allowed. The standard Moses syntax-based decoder uses a CYK+ chart parsing algorithm, in which each source sentence is parsed and contiguous spans are processed in a bottom-up fashion. 
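Before turning to how such rules are matched, the following sketch shows one possible in-memory encoding of a shallow ℓMBOT rule, the kind of object the modified decoder has to store: a shallow left-hand side, a tuple of shallow output trees, and an alignment from output leaf nonterminals to pairs of an input leaf position and a translation index, together with a check of the two conditions imposed on alignments (injectivity, and that requesting the i-th translation implies requesting all earlier ones). The class and function names are invented for this illustration and are not the Moses data structures; the example approximates the rule S(NP VBD NP) → (S(NP VAFIN PP VVPP)) of Figures 2 and 6.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

NONTERMINALS = {"S", "NP", "VP", "PP", "VBD", "VAFIN", "VVPP"}

@dataclass
class Tree:
    label: str
    children: List["Tree"] = field(default_factory=list)

    def is_leaf_nonterminal(self) -> bool:
        return not self.children and self.label in NONTERMINALS

def leaf_nonterminal_positions(trees: List[Tree]) -> List[Tuple[int, int]]:
    """The analogue of leaf(k)_N for shallow trees: (component index, child
    index) of every leaf nonterminal in a sequence of output trees."""
    return [(i, w)
            for i, t in enumerate(trees)
            for w, child in enumerate(t.children)
            if child.is_leaf_nonterminal()]

@dataclass
class LMBOTRule:
    lhs: Tree                      # shallow input tree
    rhs: Tuple[Tree, ...]          # sequence of shallow output trees
    # alignment: (output component, position) -> (lhs position, translation index)
    align: Dict[Tuple[int, int], Tuple[int, int]]
    weight: float = 1.0

def alignment_is_valid(rule: LMBOTRule) -> bool:
    """Injectivity, plus: whenever the j-th translation of an lhs leaf is
    requested, translations 1 .. j-1 of that leaf are requested as well."""
    values = list(rule.align.values())
    if len(values) != len(set(values)):
        return False
    return all((w, i) in values for (w, j) in values for i in range(1, j))

# Approximation of the rule S(NP VBD NP) -> ( S(NP VAFIN PP VVPP) ) of
# Figures 2 and 6; the lhs VBD contributes two translations (VAFIN and VVPP),
# so its rank is 2 while both NPs have rank 1.
rule = LMBOTRule(
    lhs=Tree("S", [Tree("NP"), Tree("VBD"), Tree("NP")]),
    rhs=(Tree("S", [Tree("NP"), Tree("VAFIN"), Tree("PP"), Tree("VVPP")]),),
    align={(0, 0): (0, 1),   # output NP    <- 1st translation of the lhs NP at 0
           (0, 1): (1, 1),   # output VAFIN <- 1st translation of the lhs VBD
           (0, 2): (2, 1),   # output PP    <- 1st translation of the second lhs NP
           (0, 3): (1, 2)},  # output VVPP  <- 2nd translation of the lhs VBD
)
assert alignment_is_valid(rule)
print(leaf_nonterminal_positions(list(rule.rhs)))   # [(0, 0), (0, 1), (0, 2), (0, 3)]
```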
A rule is applicable11 if the left-hand side of it matches the nonterminal assigned to the full span by the parser and the (non-)terminal assigned to each subspan.12 In order to speed up the decoding, cube pruning (Chiang, 2007) is applied to each chart cell in order to select the most likely hypotheses for subspans. The language model (LM) scoring is directly integrated into the cube pruning algorithm. Thus, LM estimates are available for all considered hypotheses. To accommodate ℓMBOT rules, we had to modify the Moses syntax-based decoder in several ways. First, the rule representation itself is adjusted to allow sequences of shallow output trees on the target side. Naturally, we also had to adjust hypothesis expansion and, most importantly, language model scoring inside the cube pruning algorithm. An overview of the modified pruning procedure is given in Algorithm 1. The most important modifications are hidden in lines 5 and 8. The expansion in Line 5 involves matching all nonterminal leaves in the rule as defined in Definition 2, which includes matching all leaf nonterminals in all (discontiguous) output trees. Because the output trees can remain discontiguous after hypothesis creation, LM scoring has to be done individually over all output trees. Algorithm 2 describes our LM scoring in detail. In it we use k strings w1, . . . , wk to collect the lexical information from the k output com11Note that our notion of applicable rules differs from the default in Moses. 12Theoretically, this allows that the decoder ignores unary parser nonterminals, which could also disappear when we make our rules shallow; e.g., the parse tree left in the pretranslation of Figure 5 can be matched by a rule with lefthand side NP(Official, forecasts). Algorithm 1 Cube pruning with ℓMBOT rules Data structures: - r[i, j]: list of rules matching span e[i . . . j] - h[i, j]: hypotheses covering span e[i . . . j] - c[i, j]: cube of hypotheses covering span e[i . . . j] 1: for all ℓMBOT rules ρ covering span e[i . . . j] do 2: Insert ρ into r[i, j] 3: Sort r[i, j] 4: for all (l →ψ r) ∈r[i, j] do 5: Create h[i, j] by expanding all nonterminals in l with best scoring hypotheses for subspans 6: Add h[i, j] to c[i, j] 7: for all hypotheses h ∈c[i, j] do 8: Estimate LM score for h // see Algorithm 2 9: Estimate remaining feature scores 10: Sort c[i, j] 11: Retrieve first α elements from c[i, j] // we use α = 103 ponents (u1, . . . , uk) of a rule. These strings can later be rearranged in any order, so we LM-score all of them separately. Roughly speaking, we obtain wi by traversing ui depth-first left-to-right. If we meet a lexical element (terminal), then we add it to the end of wi. On the other hand, if we meet a nonterminal, then we have to consult the best pre-translation τ ′ = ⟨t′, (u′ 1, . . . , u′ k′)⟩, which will contribute the subtree at this position. Suppose that u′ j will be substituted into the nonterminal in question. Then we first LM-score the pretranslation τ ′ to obtain the string w′ j corresponding to u′ j. This string w′ j is then appended to wi. Once all the strings are built, we score them using our 4-gram LM. The overall LM score for the pretranslation is obtained by multiplying the scores for w1, . . . , wk. Clearly, this treats w1, . . . , wk as k separate strings, although they eventually will be combined into a single string. 
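A stripped-down rendering of this per-component scoring is given below. The tree encoding, the hypothesis lookup and the lm callable are placeholders invented for the sketch rather than the actual Moses objects; what it preserves is the core of Algorithm 2: build one string per output component, splice in the already computed strings of sub-hypotheses at leaf nonterminals, and multiply the per-string LM scores.

```python
def lm_score_pretranslation(rhs_components, sub_strings, lm):
    """rhs_components: one flat list of leaves per output tree of the rule;
    a leaf is ("t", word) for a terminal or ("nt", hyp_id, comp) for a leaf
    nonterminal that will be filled by component `comp` of sub-hypothesis
    `hyp_id`.  sub_strings maps hyp_id to its list of component word strings
    (already built when that hypothesis was scored).  lm scores a word list.
    Returns (score, strings) so the parent hypothesis can reuse the strings."""
    strings, score = [], 1.0
    for component in rhs_components:
        words = []
        for leaf in component:                     # left-to-right traversal
            if leaf[0] == "t":
                words.append(leaf[1])
            else:                                  # splice in the sub-hypothesis string
                _, hyp_id, comp = leaf
                words.extend(sub_strings[hyp_id][comp])
        score *= lm(words)
        strings.append(words)
    return score, strings

# Toy run reproducing the string of Figure 7, with a dummy "LM".
lm = lambda ws: 0.1 ** len(ws)
subs = {"NP": [["Offizielle", "Prognosen"]],
        "VBD": [["sind"], ["ausgegangen"]],
        "PP": [["von", "nur", "3", "%"]]}
rule_rhs = [[("nt", "NP", 0), ("nt", "VBD", 0), ("nt", "PP", 0), ("nt", "VBD", 1)]]
score, strings = lm_score_pretranslation(rule_rhs, subs, lm)
print(" ".join(strings[0]))   # Offizielle Prognosen sind von nur 3 % ausgegangen
```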
Whenever such a concatenation happens, our LM scoring will automatically compute n-gram LM scores based on the concatenation, which in particular means that the LM scores get more accurate for larger spans. Finally, in the final rule only one component is allowed, which yields that the LM indeed scores the complete output sentence. Figure 7 illustrates our LM scoring for a pretranslation involving a rule with two (discontiguous) target sequences (the construction of the pretranslation is illustrated in Figure 6). When processing the rule rooted at S, an LM estimate is computed by expanding all nonterminal leaves. In our case, these are NP, VAFIN, PP, and VVPP. However, the nodes VAFIN and VVPP are assembled from a (discontiguous) tree sequence. This means that those units have been considered as in816 Algorithm 2 LM scoring Data structures: - (u1, . . . , uk): right-hand side of a rule - (w1, . . . , wk): k strings all initially empty 1: score = 1 2: for all 1 ≤i ≤k do 3: for all leaves ℓin ui (in lexicographic order) do 4: if ℓis a terminal then 5: Append ℓto wi 6: else 7: LM score the best hypothesis for the subspan 8: Expand wi by the corresponding w′ j 9: score = score · LM(wi) dependent until now. So far, the LM scorer could only score their associated unigrams. However, we also have their associated strings w′ 1 and w′ 2, which can now be used. Since VAFIN and VVPP now become parts of a single tree, we can perform LM scoring normally. Assembling the string we obtain Offizielle Prognosen sind von nur 3 % ausgegangen which is scored by the LM. Thus, we first score the 4-grams “Offizielle Prognosen sind von”, then “Prognosen sind von nur”, etc. 5 Experiments 5.1 Setup The baseline system for our experiments is the syntax-based component of the Moses opensource toolkit of Koehn et al. (2007) and Hoang et al. (2009). We use linguistic syntactic annotation on both the source and the target language side (tree-to-tree). Our contrastive system is the ℓMBOT-based translation system presented here. We provide the system with a set of SCFG as well as ℓMBOT rules. We do not impose any maximal span restriction on either system. The compared systems are evaluated on the English-to-German13 news translation task of WMT 2009 (Callison-Burch et al., 2009). For both systems, the used training data is from the 4th version of the Europarl Corpus (Koehn, 2005) and the News Commentary corpus. Both translation models were trained with approximately 1.5 million bilingual sentences after length-ratio filtering. The word alignments were generated by GIZA++ (Och and Ney, 2003) with the growdiag-final-and heuristic (Koehn et al., 2005). The 13Note that our ℓMBOT-based system can be applied to any language pair as it involves no language-specific engineering. System BLEU Baseline 12.60 ℓMBOT ∗13.06 Moses t-to-s 12.72 Table 1: Evaluation results. The starred results are statistically significant improvements over the Baseline (at confidence p < 0.05). English side of the bilingual data was parsed using the Charniak parser of Charniak and Johnson (2005), and the German side was parsed using BitPar (Schmid, 2004) without the function and morphological annotations. Our German 4gram language model was trained on the German sentences in the training data augmented by the Stuttgart SdeWaC corpus (Web-as-Corpus Consortium, 2008), whose generation is detailed in (Baroni et al., 2009). The weights λm in the log-linear model were trained using minimum error rate training (Och, 2003) with the News 2009 development set. 
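For concreteness, two of the less standard ingredients behind these tuned weights λm (the averaged lexical translation weight and the gap penalty of Section 3, together with their log-linear combination) can be sketched as follows. The function names, the toy probability table and the treatment of the NULL alignment as an ordinary dictionary entry are assumptions made for the illustration, not the actual implementation.

```python
import math

def averaged_lexical_weight(source_words, aligned, w):
    """Product over source words of the averaged word translation probability
    of the target words aligned to it; an unaligned word is scored against a
    special NULL word (footnote 10)."""
    score = 1.0
    for e in source_words:
        targets = aligned.get(e, [])
        if not targets:
            score *= w[("NULL", e)]
        else:
            score *= sum(w[(g, e)] for g in targets) / len(targets)
    return score

def gap_penalty(num_target_sequences):
    """The count c of target-side sequences enters as the score 100**(1 - c),
    discouraging rules with many discontiguous target sequences."""
    return 100.0 ** (1 - num_target_sequences)

def loglinear_score(feature_values, lambdas):
    """p(g|e) proportional to the product of h_m ** lambda_m."""
    return math.exp(sum(lam * math.log(h) for lam, h in zip(lambdas, feature_values)))

w = {("offizielle", "official"): 0.6, ("amtliche", "official"): 0.2,
     ("NULL", "just"): 0.05}
print(averaged_lexical_weight(["official", "just"],
                              {"official": ["offizielle", "amtliche"]}, w))
# 0.02 (up to float rounding): average 0.4 for "official" times 0.05 for NULL
print(gap_penalty(2))   # 0.01
```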
Both systems use glue-rules, which allow them to concatenate partial translations without performing any reordering. 5.2 Results We measured the overall translation quality with the help of 4-gram BLEU (Papineni et al., 2002), which was computed on tokenized and lowercased data for both systems. The results of our evaluation are reported in Table 1. For comparison, we also report the results obtained by a system that utilizes parses only on the source side (Moses tree-to-string) with its standard features. We can observe from Table 1 that our ℓMBOTbased system outperforms the baseline. We obtain a BLEU score of 13.06, which is a gain of 0.46 BLEU points over the baseline. This improvement is statistically significant at confidence p < 0.05, which we computed using the pairwise bootstrap resampling technique of Koehn (2004). Our system is also better than the Moses tree-tostring system. However this improvement (0.34) is not statistically significant. In the next section, we confirm the result of the automatic evaluation through a manual examination of some translations generated by our system and the baseline. In Table 2, we report the number of ℓMBOT rules used by our system when decoding the test set. By lex we denote rules containing only lexical 817 lex non-term total contiguous 23,175 18,355 41,530 discontiguous 315 2,516 2,831 Table 2: Number of rules used in decoding test (lex: only lexical items; non-term: at least one nonterminal). 2-dis 3-dis 4-dis 2,480 323 28 Table 3: Number of k-discontiguous rules. items. The label non-term stands for rules containing at least one leaf nonterminal. The results show that approx. 6% of all rules used by our ℓMBOTsystem have discontiguous target sides. Furthermore, the reported numbers show that the system also uses rules in which lexical items are combined with nonterminals. Finally, Table 3 presents the number of rules with k target side components used during decoding. 5.3 Linguistic Analysis In this section we present linguistic evidence supporting the fact that the ℓMBOT-based system significantly outperforms the baseline. All examples are taken from the translation of the test set used for automatic evaluation. We show that when our system generates better translations, this is directly related to the use of ℓMBOT rules. Figures 8 and 9 show the ability of our system to correctly reorder multiple segments in the source sentence where the baseline translates those segments sequentially. An analysis of the generated derivations shows that our system produces the correct translation by taking advantage of rules with discontiguous units on target language side. The rules used in the presented derivations are displayed in Figures 10 and 11. In the first example (Figure 8), we begin by translating “((smuggle)VB (eight projectiles)NP (into the kingdom)PP)VP” into the discontiguous sequence composed of (i) “(acht geschosse)NP” ; (ii) “(in das k¨onigreich)PP” and (iii) “(schmuggeln)VP”. In a second step we assemble all sequences in a rule with contiguous target language side and, at the same time, insert the word “(zu)PTKZU” between “(in das k¨onigreich)PP” and “(schmuggeln)VP”. The second example (Figure 9) illustrates a more complex reordering. 
First, we transVP VB NP PP →  NP NP , PP PP , VVINF VVINF  S TO VP →  VP NP PP PTKZU VVINF  Figure 10: Used ℓMBOT rules for verbal reordering VP ADV commented on NP →  NP NP , ADV ADV , VPP kommentiert  VP VBZ VP →  NP NP , VAFIN VAFIN , ADV ADV , VPP VPP  TOP NP VP →  TOP NP VAFIN NP ADV VVPP  Figure 11: Used ℓMBOT rules for verbal reordering late “((again)ADV commented on (the problem of global warming)NP)VP” into the discontiguous sequence composed of (i) “(das problem der globalen erw¨armung)NP”; (ii) “(wieder)ADV” and (iii) “(kommentiert)VPP”. In a second step, we translate the auxiliary “(has)VBZ” by inserting “(hat)VAFIN” into the sequence. We thus obtain, for the input segment “((has)VBZ (again)ADV commented on (the problem of global warming)NP)VP”, the sequence (i) “(das problem der globalen erw¨armung)NP”; (ii) “(hat)VAFIN”; (iii) “(wieder)ADV”; (iv) “(kommentiert)VVPP”. In a last step, the constituent “(president v´aclav klaus)NP” is inserted between the discontiguous units “(hat)VAFIN” and “(wieder)ADV” to form the contiguous sequence “((das problem der globalen erw¨armung)NP (hat)VAFIN (pr¨asident v´aclav klaus)NP (wieder)ADV (kommentiert)VVPP)TOP”. Figures 12 and 13 show examples where our system generates complex words in the target language out of a simple source language word. Again, an analysis of the generated derivation shows that ℓMBOT takes advantage of rules having several target side components. Examples of such rules are given in Figure 14. Through its ability to use these discontiguous rules, our system correctly translates into reflexive or particle verbs such as “konzentriert sich” (for the English “focuses”) or “besteht darauf” (for the English “insist”). Another phenomenon well handled by our system are relative pronouns. Pronouns such as “that” or “whose” are systematically translated 818 . . . geplant hatten 8 geschosse in das k¨onigreich zu schmuggeln . . . had planned to smuggle 8 projectiles into the kingdom . . . vorhatten zu schmuggeln 8 geschosse in das k¨onigreich Figure 8: Verbal Reordering (top: our system, bottom: baseline) das problem der globalen erw¨armung hat pr¨asident v´aclav klaus wieder kommentiert president v´aclav klaus has again commented on the problem of global warming pr¨asident v´aclav klaus hat wieder kommentiert das problem der globalen erw¨armung Figure 9: Verbal Reordering (top: our system, bottom: baseline) . . . die serbische delegation bestand darauf , dass jede entscheidung . . . . . . the serbian delegation insisted that every decision . . . . . . die serbische delegation bestand , jede entscheidung . . . Figure 12: Relative Clause (top: our system, bottom: baseline) . . . die roadmap von bali , konzentriert sich auf die bem¨uhungen . . . . . . the bali roadmap that focuses on efforts . . . . . . die bali roadmap , konzentriert auf bem¨uhungen . . . Figure 13: Reflexive Pronoun (top: our system, bottom: baseline) into both both, “,” and “dass” or “,” and “deren” (Figure 12). 6 Conclusion and Future Work We demonstrated that our ℓMBOT-based machine translation system beats a standard tree-to-tree system (Moses tree-to-tree) on the WMT 2009 translation task English →German. To achieve this we implemented the formal model as described in Section 2 inside the Moses machine translation toolkit. Several modifications were necessary to obtain a working system. We publicly release all our developed software and our complete tool-chain to allow independent experiments and evaluation. 
This includes our ℓMBOT decoder IN that →  $, , , KOUS dass  VBZ focuses →  VVFIN konzentriert , PRF sich  Figure 14: ℓMBOT rules generating a relative clause/reflexive pronoun presented in Section 4 and a separate C++ module that we use for rule extraction (see Section 3). Besides the automatic evaluation, we also performed a small manual analysis of obtained translations and show-cased some examples (see Section 5.3). We argue that our ℓMBOT approach can adequately handle discontiguous phrases, which occur frequently in German. Other languages that exhibit such phenomena include Czech, Dutch, Russian, and Polish. Thus, we hope that our system can also successfully be applied for other language pairs, which we plan to pursue as well. In other future work, we want to investigate full backwards application of ℓMBOT rules, which would be more suitable for the converse translation direction German →English. The current independent LM scoring of components has some negative side-effects that we plan to circumvent with the use of lazy LM scoring. Acknowledgement The authors thank Alexander Fraser for his ongoing support and advice. All authors were financially supported by the German Research Foundation (DFG) grant MA 4959 / 1-1. 819 References Andr´e Arnold and Max Dauchet. 1982. Morphismes et bimorphismes d’arbres. Theoret. Comput. Sci., 20(1):33–93. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky Wide Web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209–226. Chris Callison-Burch, Philipp Koehn, Christof Monz, and Josh Schroeder. 2009. Findings of the 2009 Workshop on Statistical Machine Translation. In Proc. 4th Workshop on Statistical Machine Translation, pages 1–28. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and MaxEnt discriminative reranking. In Proc. 43rd ACL, pages 173–180. David Chiang. 2007. Hierarchical phrase-based translation. Computat. Linguist., 33(2):201–228. David Chiang. 2010. Learning to translate with source and target syntax. In Proc. 48th ACL, pages 1443– 1452. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In Proc. 41st ACL, pages 205–208. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proc. HLT-NAACL, pages 273–280. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve Deneefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proc. 44th ACL, pages 961–968. Hieu Hoang, Philipp Koehn, and Adam Lopez. 2009. A unified framework for phrase-based, hierarchical, and syntax-based statistical machine translation. In Proc. 6th Int. Workshop Spoken Language Translation, pages 152–159. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proc. 7th Conf. Association for Machine Translation of the Americas, pages 66– 73. Philip Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. HLT-NAACL, pages 127–133. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 IWSLT Speech Translation Evaluation. In Proc. 2nd Int. Workshop Spoken Language Translation. 
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. ACL, pages 177–180. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. EMNLP, pages 388–395. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. 10th Machine Translation Summit, pages 79–86. Alon Lavie, Alok Parlikar, and Vamshi Ambati. 2008. Syntax-driven learning of sub-sentential translation equivalents and translation rules from parsed parallel corpora. In Proc. 2nd ACL Workshop on Syntax and Structure in Statistical Translation, pages 87–95. Eric Lilin. 1978. Une g´en´eralisation des transducteurs d’´etats finis d’arbres: les S-transducteurs. Th`ese 3`eme cycle, Universit´e de Lille. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proc. 44th ACL, pages 609–616. Yang Liu, Yajuan L¨u, and Qun Liu. 2009. Improving tree-to-tree translation with packed forests. In Proc. 47th ACL, pages 558–566. Andreas Maletti. 2010. Why synchronous tree substitution grammars? In Proc. HLT-NAACL, pages 876–884. Andreas Maletti. 2011. How to train your multi bottom-up tree transducer. In Proc. 49th ACL, pages 825–834. Andreas Maletti. 2012. Every sensible extended topdown tree transducer is a multi bottom-up tree transducer. In Proc. HLT-NAACL, pages 263–273. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computat. Linguist., 29(1):19–51. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. 41st ACL, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. 40th ACL, pages 311–318. Jean-Claude Raoult. 1997. Rational tree relations. Bull. Belg. Math. Soc. Simon Stevin, 4(1):149–176. Helmut Schmid. 2004. Efficient parsing of highly ambiguous context-free grammars with bit vectors. In Proc. 20th COLING, pages 162–168. 820 Jun Sun, Min Zhang, and Chew Lim Tan. 2009. A noncontiguous tree sequence alignment-based model for statistical machine translation. In Proc. 47th ACL, pages 914–922. Web-as-Corpus Consortium. 2008. SDeWaC — a 0.88 billion word corpus for german. Website: http: //wacky.sslmit.unibo.it/doku.php. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computat. Linguist., 23(3):377–403. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan, and Sheng Li. 2008a. A tree sequence alignment-based tree-to-tree translation model. In Proc. 46th ACL, pages 559–567. Min Zhang, Hongfei Jiang, Haizhou Li, Aiti Aw, and Sheng Li. 2008b. Grammar comparison study for translational equivalence modeling and statistical machine translation. In Proc. 22nd International Conference on Computational Linguistics, pages 1097–1104. 821
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 822–831, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Enlisting the Ghost: Modeling Empty Categories for Machine Translation Bing Xiang IBM T. J. Watson Research Center 1101 Kitchawan Rd Yorktown Heights, NY 10598 [email protected] Xiaoqiang Luo * Google Inc. 111 8th Ave New York, NY 10011 [email protected] Bowen Zhou IBM T. J. Watson Research Center 1101 Kitchawan Rd Yorktown Heights, NY 10598 [email protected] Abstract Empty categories (EC) are artificial elements in Penn Treebanks motivated by the government-binding (GB) theory to explain certain language phenomena such as pro-drop. ECs are ubiquitous in languages like Chinese, but they are tacitly ignored in most machine translation (MT) work because of their elusive nature. In this paper we present a comprehensive treatment of ECs by first recovering them with a structured MaxEnt model with a rich set of syntactic and lexical features, and then incorporating the predicted ECs into a Chinese-to-English machine translation task through multiple approaches, including the extraction of EC-specific sparse features. We show that the recovered empty categories not only improve the word alignment quality, but also lead to significant improvements in a large-scale state-of-the-art syntactic MT system. 1 Introduction One of the key challenges in statistical machine translation (SMT) is to effectively model inherent differences between the source and the target language. Take the Chinese-English SMT as an example: it is non-trivial to produce correct pronouns on the target side when the source-side pronoun is missing. In addition, the pro-drop problem can also degrade the word alignment quality in the training data. A sentence pair observed in the real data is shown in Figure 1 along with the word alignment obtained from an automatic word aligner, where the English subject pronoun * This work was done when the author was with IBM. “that” is missing on the Chinese side. Consequently, “that” is incorrectly aligned to the second to the last Chinese word “De”, due to their high co-occurrence frequency in the training data. If the dropped pronoun were recovered, “that” would have been aligned with the dropped-pro (cf. Figure 3), which is a much more sensible alignment. Figure 1: Example of incorrect word alignment due to missing pronouns on the Chinese side. In order to account for certain language phenomena such as pro-drop and wh-movement, a set of special tokens, called empty categories (EC), are used in Penn Treebanks (Marcus et al., 1993; Bies and Maamouri, 2003; Xue et al., 2005). Since empty categories do not exist in the surface form of a language, they are often deemed elusive and recovering ECs is even figuratively called “chasing the ghost” (Yang and Xue, 2010). In this work we demonstrate that, with the availability of large-scale EC annotations, it is feasible to predict and recover ECs with high accuracy. More importantly, with various approaches of modeling the recovered ECs in SMT, we are able to achieve significant improvements1. The contributions of this paper include the following: • Propose a novel structured approach to EC prediction, including the exact word-level lo1Hence “Enlisting the ghost” in the title of this paper. 822 cation and EC labels. 
Our results are significantly higher in accuracy than that of the state-of-the-art; • Measure the effect of ECs on automatic word alignment for machine translation after integrating recovered ECs into the MT data; • Design EC-specific features for phrases and syntactic tree-to-string rules in translation grammar; • Show significant improvement on top of the state-of-the-art large-scale hierarchical and syntactic machine translation systems. The rest of the paper is organized as follows. In Section 2, we present a structured approach to EC prediction. In Section 3, we describe the integration of Chinese ECs in MT. The experimental results for both EC prediction and SMT are reported in Section 4. A survey on the related work is conducted in Section 5, and Section 6 summarizes the work and introduces some future work. 2 Chinese Empty Category Prediction The empty categories in the Chinese Treebank (CTB) include trace markers for A’- and Amovement, dropped pronoun, big PRO etc. A complete list of categories used in CTB is shown in Table 1 along with their intended usages. Readers are referred to the documentation (Xue et al., 2005) of CTB for detailed discussions about the characterization of empty categories. EC Meaning *T* trace of A’-movement * trace of A-movement *PRO* big PRO in control structures *pro* pro-drop *OP* operator in relative clauses *RNR* for right node raising Table 1: List of empty categories in the CTB. In this section, we tackle the problem of recovering Chinese ECs. The problem has been studied before in the literature. For instance, Yang and Xue (2010) attempted to predict the existence of an EC before a word; Luo and Zhao (2011) predicted ECs on parse trees, but the position information of some ECs is partially lost in their representation. Furthermore, Luo and Zhao (2011) conducted experiments on gold parse trees only. In our opinion, recovering ECs from machine parse trees is more meaningful since that is what one would encounter when developing a downstream application such as machine translation. In this paper, we aim to have a more comprehensive treatment of the problem: all EC types along with their locations are predicted, and we will report the results on both human parse trees and machinegenerated parse trees. 2.1 Representation of Empty Categories Our effort of recovering ECs is a two-step process: first, at training time, ECs in the Chinese Treebank are moved and preserved in the portion of the tree structures pertaining to surface words only. Original ECs and their subtrees are then deleted without loss of information; second, a model is trained on transformed trees to predict and recover ECs. Empty categories heavily depend on syntactic tree structure. For this reason, we choose to project them onto a parse tree node. To facilitate presentation, we first distinguish a solid vs. an empty non-terminal node. A non-terminal node is solid if and only if it contains at least one child node that spans one or more surface words (as opposed to an EC); accordingly, an empty node is a non-terminal node that spans only ECs. In the left half of Figure 2, the NP node that is the immediate child of IP has only one child node spanning an EC – (-NONE- *pro*), and is thus an empty node; while all other non-terminal nodes have at least one surface word as their child and are thus all solid nodes. We decide to attach an EC to its lowest solid ancestor node. That is, the EC is moved up to the first solid node in the syntactic tree. 
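The solid/empty distinction is straightforward to compute from a tree. A minimal sketch, with a made-up nested-dict tree encoding and placeholder surface words (this is not the authors' code):

```python
def is_empty(node):
    """A node is 'empty' if it spans only empty categories; a non-terminal is
    'solid' as soon as at least one child spans a surface word."""
    if "word" in node:                           # terminal / pre-terminal
        return node["label"] == "-NONE-"
    return all(is_empty(c) for c in node["children"])

def is_solid(node):
    return "word" not in node and not is_empty(node)

# As in the left half of Figure 2: the NP dominating (-NONE- *pro*) is empty,
# the IP above it is solid. Surface words here are placeholders.
np_pro = {"label": "NP", "children": [{"label": "-NONE-", "word": "*pro*"}]}
ip = {"label": "IP",
      "children": [{"label": "ADVP", "children": [{"label": "AD", "word": "w1"}]},
                   np_pro,
                   {"label": "VP", "children": [{"label": "VV", "word": "w2"}]}]}
print(is_empty(np_pro), is_solid(ip))    # True True
```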
After ECs are attached, all empty nodes and ECs are deleted from the tree. In order to uniquely recover ECs, we also need to encode the position information. To this end, the relative child index of an EC is affixed to the EC tag. Take the NP node spanning the *pro* in Figure 2 as an example, the *pro* is moved to the lowest solid ancestor, IP node, and its position is encoded by @1 since the deleted NP is the second child of the IP node (we use 0based indices). With this transformation, we are able to recover not only the position of an EC, but its type as well. A special tag NULL is attached to non-terminal nodes without EC. Since an EC is introduced to express the structure of a sentence, it is a good practice to associate it with the syn823 Figure 2: Example of tree transformation on training data to encode an empty category and its position information. tactic tree, as opposed to simply attaching it to a neighboring word, as was done in (Yang and Xue, 2010). We believe this is one of the reasons why our model has better accuracy than that of (Yang and Xue, 2010) (cf. Table 7). In summary, a projected tag consists of an EC type (such as *pro*) and the EC’s position information. The problem of predicting ECs is then cast into predicting an EC tag at each non-terminal node. Notice that the input to such a predictor is a syntactic tree without ECs, e.g., the parse tree on the right hand of Figure 2 without the EC tag *pro*@1 is such an example. 2.2 A Structured Empty Category Model We propose a structured MaxEnt model for predicting ECs. Specially, given a syntactic tree, T, whose ECs have been projected onto solid nodes with the procedure described in Section 2.1, we traverse it in post-order (i.e., child nodes are visited recursively first before the current node is visited). Let T = t1t2 · · · tn be the sequence of nodes produced by the post-order traversal, and ei(i = 1, 2, · · · , n) be the EC tag associated with ti. The probabilistic model is then: P(en 1|T) = n Y i=1 P(ei|T, ei−1 1 ) = n Y i=1 exp P k λkfk(ei−1 1 , T, ei)  Z(ei−1 1 , T) (1) Eq. (1) is the familiar log linear (or MaxEnt) model, where fk(ei−1 1 , T, ei) is the feature function and Z(ei−1 1 , T) = P e∈E exp P k λkfk(ei−1 1 , T, e)  is the normalization factor. E is the set of ECs to be predicted. In the CTB 7.0 processed by the procedure in Section 2.1, the set consists of 32 EC tags plus a special NULL symbol, obtained by modulating the list of ECs in Table 1 with their positions (e.g., *pro*@1 in Figure 2). Once the model is chosen, the next step is to decide a set of features {fk(ei−1 1 , T, ei)} to be used in the model. One advantage of having the representation in Section 2.1 is that it is very easy to compute features from tree structures. Indeed, all features used in our system are computed from the syntactic trees, including lexical features. There are 3 categories of features used in the model: (1) tree label features; (2) lexical features; (3) EC features, and we list them in Table 2. In the feature description column, all node positions (e.g., “left”, “right”) are relative to the current node being predicted. Feature 1 to 10 are computed directly from parse trees, and are straightforward. We include up to 2 siblings when computing feature 9 and 10. Feature 11 to 17 are lexical features. Note that we use words at the edge of the current node: feature 11 and 12 are words at the internal boundary of the current node, while feature 13 and 14 are the immediately neighboring word external to the current node. 
Feature 15 and 17 are from head word information of the current node and the parent node. Feature 18 and 19 are computed from predicted ECs in the past – that’s why the model in Eq. (1) conditions on ei−1 1 . Besides the features presented in Table 2, we also use conjunction features between the current node label with the parent node label; the current node label with features computed from child nodes; the current node label with features from left and sibling nodes; the current node label with lexical features. 824 No. Tree Label Features 1 current node label 2 parent node label 3 grand-parent node label 4 left-most child label or POS tag 5 right-most child label or POS tag 6 label or POS tag of the head child 7 the number of child nodes 8 one level CFG rule 9 left-sibling label or POS tag 10 right-sibling label or POS tag Lexical Features 11 left-most word under the current node 12 right-most word under the current node 13 word immediately left to the span of the current node 14 word immediately right to the span of the current node 15 head word of the current node 16 head word of the parent node 17 is the current node head child of its parent? EC Features 18 predicted EC of the left sibling 19 the set of predicted ECs of child nodes Table 2: List of features. 3 Integrating Empty Categories in Machine Translation In this section, we explore multiple approaches of utilizing recovered ECs in machine translation. 3.1 Explicit Recovery of ECs in MT We conducted some initial error analysis on our MT system output and found that most of the errors that are related to ECs are due to the missing *pro* and *PRO*. This is also consistent with the findings in (Chung and Gildea, 2010). One of the other frequent ECs, *OP*, appears in the Chinese relative clauses, which usually have a Chinese word “De” aligned to the target side “that” or “which”. And the trace, *T*, exists in both Chinese and English sides. For MT we want to focus on the places where there exist mismatches between the source and target languages. A straightforward way of utilizing the recovered *pro* and *PRO* is to pre-process the MT training and test data by inserting ECs into the original source text (i.e. Chinese in this case). As mentioned in the previous section, the output of our EC predictor is a new parse tree with the labels and positions encoded in the tags. Based on the positional information in the tags, we can move the predicted ECs down to the surface level and insert them between original source words. The same prediction and “pull-down” procedure can be conducted consistently cross the MT training and test data. 3.2 Grammar Extraction on Augmented Data With the pre-processed MT training corpus, an unsupervised word aligner, such as GIZA++, can be used to generate automatic word alignment, as the first step of a system training pipeline. The effect of inserting ECs is two-fold: first, it can impact the automatic word alignment since now it allows the target-side words, especially the function words, to align to the inserted ECs and fix some errors in the original word alignment; second, new phrases and rules can be extracted from the preprocessed training data. For example, for a hierarchical MT system, some phrase pairs and Hiero (Chiang, 2005) rules can be extracted with recovered *pro* and *PRO* at the Chinese side. In this work we also take advantages of the augmented Chinese parse trees (with ECs projected to the surface) and extract tree-to-string grammar (Liu et al., 2006) for a tree-to-string MT system. 
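The pull-down of Section 3.1 can be sketched as follows. The nested-dict tree encoding and the placeholder words are invented for the illustration, and the bookkeeping for the encoded child index is simplified: with several deleted siblings under one parent the index would need adjusting, which this sketch ignores.

```python
def pull_down(node, out=None):
    """Emit the surface words left to right, re-inserting each predicted EC
    immediately before the child whose index is encoded in its tag."""
    if out is None:
        out = []
    if "word" in node:                      # terminal
        out.append(node["word"])
        return out
    pending = {}                            # child index -> EC labels to insert
    for tag in node.get("ec_tags", []):
        label, idx = tag.rsplit("@", 1)
        pending.setdefault(int(idx), []).append(label)
    for i, child in enumerate(node.get("children", [])):
        out.extend(pending.pop(i, []))
        pull_down(child, out)
    for labels in pending.values():         # encoded index beyond the last child
        out.extend(labels)
    return out

# A node that received the predicted tag *pro*@1 (the Section 2.1 encoding);
# surface words are placeholders.
tree = {"label": "IP", "ec_tags": ["*pro*@1"],
        "children": [{"label": "ADVP", "children": [{"label": "AD", "word": "w1"}]},
                     {"label": "VP", "children": [{"label": "VV", "word": "w2"},
                                                  {"label": "NN", "word": "w3"}]}]}
print(pull_down(tree))    # ['w1', '*pro*', 'w2', 'w3']
```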
Due to the recovered ECs in the source parse trees, the tree-to-string grammar extracted from such trees can be more discriminative, with an increased capability of distinguishing different context. An example of an augmented Chinese parse tree aligned to an English string is shown in Figure 3, in which the incorrect alignment in Figure 1 is fixed. A few examples of the extracted Hiero rules and tree-to-string rules are also listed, which we would not have been able to extract from the original incorrect word alignment when the *pro* was missing. 3.3 Soft Recovery: EC-Specific Sparse Features Recovered ECs are often good indicators of what hypothesis should be chosen during decoding. In addition to the augmented syntax-based grammar, we propose sparse features as a soft constraint to boost the performance. For each phrase pair, Hiero rule or tree-to-string rule in the MT system, a binary feature fk fires if there exists a *pro* on the source side and it aligns to one of its most frequently aligned target words found in the training corpus. We also fire another feature if *pro* 825 Figure 3: Fixed word alignment and examples of extracted Hiero rules and tree-to-string rules. aligns to any other target words so the model can choose to penalize them based on a tuning set. Similar features can fire for *PRO*. The feature weights can be tuned on a tuning set in a log-linear model along with other usual features/costs, including language model scores, bi-direction translation probabilities, etc. The motivation for such sparse features is to reward those phrase pairs and rules that have highly confident lexical pairs specifically related to ECs, and penalize those who don’t have such lexical pairs. Table 3 listed some of the most frequent English words aligned to *pro* or *PRO* in a ChineseEnglish parallel corpus with 2M sentence pairs. Their co-occurrence counts and the lexical translation probabilities are also shown in the table. In total we use 15 sparse features for frequent lexical pairs, including 13 for *pro* and 2 for *PRO*, and two more features for any other target words that align to *pro* or *PRO*. Source Target Counts P(t|s) *pro* the 93100 0.11 *pro* to 86965 0.10 *pro* it 45423 0.05 *pro* in 36129 0.04 *pro* we 24509 0.03 *pro* which 17259 0.02 *PRO* to 195464 0.32 *PRO* for 31200 0.05 Table 3: Example of frequent word pairs used for sparse features. 4 Experimental Results 4.1 Empty Category Prediction We use Chinese Treebank (CTB) v7.0 to train and test the EC prediction model. We partition the data into training, development and test sets. The training set includes 32925 sentences from CTB files 0001-0325, 0400-0454, 0500-0542, 06000840, 0590-0596, 1001-1120, 2000-3000, cctv, cnn, msnbc, and phoenix 00-06. The development set has 3033 sentences, from files 0549-0554, 0900-0931, 1136-1151, 3076-3145, and phoenix 10-11. The test set contains 3297 sentences, from files 0543-0548, 0841-0885, 1121-1135, 30013075, and phoenix 07-09. To measure the accuracy of EC prediction, we project the predicted tags from the upper level nodes in the parse trees down to the surface level based on the position information encoded in the tags. The position index for each inserted EC, counted at the surface level, is attached for scoring purpose. The same operation is applied on both the reference and the system output trees. Such projection is necessary, especially when the two trees differ in structure (e.g. gold trees vs. machine-generated trees). 
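Returning to the sparse features of Section 3.3, their firing condition can be sketched as below. The frequent-pair table is only a subset of Table 3, and the rule encoding (token lists plus word alignment links) is a placeholder, not the system's internal representation of phrase pairs, Hiero rules or tree-to-string rules.

```python
# Frequent (EC, target word) pairs, a subset of Table 3; the full system uses
# 13 such pairs for *pro* and 2 for *PRO*, plus one catch-all feature per EC
# for any other aligned target word.
FREQUENT = {"*pro*": {"the", "to", "it", "in", "we", "which"},
            "*PRO*": {"to", "for"}}

def ec_sparse_features(src_tokens, tgt_tokens, links):
    """links: set of (source index, target index) word alignment links inside
    the rule. Returns the binary EC features that fire for this rule."""
    feats = set()
    for i, j in links:
        s, t = src_tokens[i], tgt_tokens[j]
        if s in FREQUENT:
            feats.add(f"{s}={t}" if t in FREQUENT[s] else f"{s}=OTHER")
    return feats

print(ec_sparse_features(["*pro*", "认为"], ["we", "believe"], {(0, 0), (1, 1)}))
# {'*pro*=we'}
```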
We compute the precision, recall and F1 scores for each EC on the test set, and collect their counts in the reference and system output. The results are shown in Table 4, where the LDC gold parse trees are used to extract syntactic features for the model. The first row in the table shows the accuracy for the places where no EC should be inserted. The predictor achieves 99.5% F1 score for this category, with limited number of missing or false positives. The F1 scores for majority of the ECs are above 70%, except for “*”, which is relatively rare in the data. For the two categories that are interesting to MT, *pro* and *PRO*, the predictor achieves 74.3% and 81.5% in F1 scores, respectively. The results reported above are based on the LDC gold parse trees. To apply the EC prediction to NLP applications, such as MT, it is impossible to always rely on the gold trees due to its limited availability. We parse our test set with a maximum entropy based statistical parser (Ratnaparkhi, 1997) first. The parser accuracy is around 84% on the test set. Then we extract features based on the system-generated parse trees, and decode with the previously trained model. The results are shown in Table 5. Compared to those in Table 4, the F1 scores dropped by different degrees for dif826 Tag Ref Sys P R F1 NULL 75159 75508 99.3 99.7 99.5 *pro* 1692 1442 80.8 68.9 74.3 *PRO* 1410 1282 85.6 77.8 81.5 *T* 1851 1845 82.8 82.5 82.7 *OP* 1721 1853 90.9 97.9 94.2 *RNR* 51 39 87.2 66.7 75.6 * 156 96 63.5 39.1 48.4 Table 4: Prediction accuracy with gold parse trees, where NULL represents the cases where no ECs should be produced. ferent types. Such performance drop is expected since the system relies heavily on syntactic structure, and parsing errors create an inherent mismatching condition between the training and testing time. The smallest drop among all types is on NULL, at about 1.6%. The largest drop occurs for *OP*, at 27.1%, largely due to the parsing errors on the CP nodes. The F1 scores for *pro* and *PRO* when using system-generated parse trees are between 50% to 60%. Tag Precision Recall F1 NULL 97.6 98.2 97.9 *pro* 51.1 50.1 50.6 *PRO* 66.4 50.5 57.3 *T* 68.2 59.9 63.8 *OP* 66.8 67.3 67.1 *RNR* 70.0 54.9 61.5 * 60.9 35.9 45.2 Table 5: Prediction accuracy with systemgenerated parse trees. To show the effect of ECs other than *pro* and *PRO*, we remove all ECs in the training data except *pro* and *PRO*. So the model only predicts NULL, *pro* or *PRO*. The results on the test set are listed in Table 6. There is 0.8% and 0.5% increase on NULL and *pro*, respectively. The F1 score for *PRO* drops by 0.2% slightly. As mentioned earlier, for MT we focus on recovering *pro* and *PRO* only. The model generating the results in Table 6 is the one we applied in our MT experiments reported later. In order to compare to the state-of-the-art models to see where our model stands, we switch our training, development and test data to those used in the work of (Yang and Xue, 2010) and (Cai et Tag Precision Recall F1 NULL 98.5 98.9 98.7 *pro* 51.0 51.1 51.1 *PRO* 66.0 50.4 57.1 Table 6: Prediction accuracy with systemgenerated parse trees, modeling *pro* and *PRO* only. al., 2011), for the purpose of a direct comparison. The training set includes CTB files 0081 through 0900. The development set includes files 0041 to 0080, and the test set contains files 0001-0040 and 0901-0931. We merge all empty categories into a single type in the training data before training our EC prediction model. 
To compare the performance on system-generated parse trees, we also train a Berkeley parser on the same training data and parse the test set. The prediction accuracy for such single type on the test set with gold or system-generated parse trees is shown in Table 7, compared to the numbers reported in (Yang and Xue, 2010) and (Cai et al., 2011). The model we proposed achieves 6% higher F1 score than that in (Yang and Xue, 2010) and 2.6% higher than that in (Cai et al., 2011), which is significant. This shows the effectiveness of our structured approach. Model T P R F1 (Yang and Xue, 2010) G 95.9 83.0 89.0 Structured (this work) G 96.5 93.6 95.0 (Yang and Xue, 2010) S 80.3 52.1 63.2 (Cai et al., 2011) S 74.0 61.3 67.0 Structured (this work) S 74.9 65.1 69.6 Table 7: Comparison with the previous results, using the same training and test data. T: parse trees. G: gold parse trees. S: system-generated parse trees. P: precision. R: recall. 4.2 MT Results In the Chinese-to-English MT experiments, we test two state-of-the-art MT systems. One is an reimplementation of Hiero (Chiang, 2005), and the other is a hybrid syntax-based tree-to-string system (Zhao and Al-onaizan, 2008), where normal phrase pairs and Hiero rules are used as a backoff for tree-to-string rules. The MT training data includes 2 million sentence pairs from the parallel corpora released by 827 LDC over the years, with the data from United Nations and Hong Kong excluded 2. The Chinese text is segmented with a segmenter trained on the CTB data using conditional random field (CRF), followed by the longest-substring match segmentation in a second pass. Our language model (LM) training data consists of about 10 billion English words, which includes Gigaword and other newswire and web data released by LDC, as well as the English side of the parallel training corpus. We train a 6-gram LM with modified Kneser-Ney smoothing (Chen and Goodman, 1998). Our tuning set for MT contains 1275 sentences from LDC2010E30. We test our system on the NIST MT08 Newswire (691 sentences) and Weblog (666 sentences) sets. Both tuning and test sets have 4 sets of references for each sentence. The MT systems are optimized with pairwise ranking optimization (Hopkins and May, 2011) to maximize BLEU (Papineni et al., 2002). We first predict *pro* and *PRO* with our annotation model for all Chinese sentences in the parallel training data, with *pro* and *PRO* inserted between the original Chinese words. Then we run GIZA++ (Och and Ney, 2000) to generate the word alignment for each direction and apply grow-diagonal-final (Koehn et al., 2003), same as in the baseline. We want to measure the impact on the word alignment, which is an important step for the system building. We append a 300-sentence set, which we have human hand alignment available as reference, to the 2M training sentence pairs before running GIZA++. The alignment accuracy measured on this alignment test set, with or without *pro* and *PRO* inserted before running GIZA++, is shown in Table 8. To make a fair comparison with the baseline alignment, any target words aligned to ECs are deemed as unaligned during scoring. We observe 1.2% improvement on function word related links, and almost the same accuracy on content words. This is understandable since *pro* and *PRO* are mostly aligned to the function words at the target side. The precision and recall for function words are shown in Table 9. 
We can see higher accuracy in both precision and recall when ECs (*pro* and *PRO*) are recovered in the Chinese side. Especially, the precision is improved by 2% absolute. 2The training corpora include LDC2003E07, LDC2003E08, LDC2005T10, LDC2006E26, LDC2006G05, LDC2007E103, LDC2008G05, LDC2009G01, and LDC2009G02. System Function Content All Baseline 51.7 69.7 65.4 +EC 52.9 69.6 65.7 Table 8: Word alignment F1 scores with or without *pro* and *PRO*. System Precision Recall F1 Baseline 54.1 49.5 51.7 +EC 56.0 50.1 52.9 Table 9: Word alignment accuracy for function words only. Next we extract phrase pairs, Hiero rules and tree-to-string rules from the original word alignment and the improved word alignment, and tune all the feature weights on the tuning set. The weights include those for usual costs and also the sparse features proposed in this work specifically for ECs. We test all the systems on the MT08 Newswire and Weblog sets. The BLEU scores from different systems are shown in Table 10 and Table 11, respectively. We measure the incremental effect of prediction (inserting *pro* and *PRO*) and sparse features. Pre-processing of the data with ECs inserted improves the BLEU scores by about 0.6 for newswire and 0.2 to 0.3 for the weblog data, compared to each baseline separately. On top of that, adding sparse features helps by another 0.3 on newswire and 0.2 to 0.4 on weblog. Overall, the Hiero and tree-to-string systems are improved by about 1 point for newswire and 0.4 to 0.7 for weblog. The smaller gain on the weblog data could be due to the more difficult data to parse, which affects the accuracy of EC prediction. All the results in Table 10 and 11 marked with “*” are statistically significant with p < 0.05 using the sign test described in (Collins et al., 2005), compared to the baseline results in each table. Two MT examples are given in Table 12, which show the effectiveness of the recovered ECs in MT. System MT08-nw MT08-wb Hiero 33.99 25.40 +prediction 34.62* 25.63 +prediction+sparse 34.95* 25.80* Table 10: BLEU scores in the Hiero system. 828 System MT08-nw MT08-wb T2S+Hiero 34.53 25.80 +prediction 35.17* 26.08 +prediction+sparse 35.51* 26.53* Table 11: BLEU scores in the tree-to-string system with Hiero rules as backoff. 5 Related Work Empty categories have been studied in recent years for several languages, mostly in the context of reference resolution and syntactic processing for English, such as in (Johnson, 2002; Dienes and Dubey, 2003; Gabbard et al., 2006). More recently, EC recovery for Chinese started emerging in literature. In (Guo et al., 2007), non-local dependencies are migrated from English to Chinese for generating proper predicateargument-modifier structures from surface context free phrase structure trees. In (Zhao and Ng, 2007), a decision tree learning algorithm is presented to identify and resolve Chinese anaphoric zero pronouns. and achieves a performance comparable to a heuristic rule-based approach. Similar to the work in (Dienes and Dubey, 2003), empty detection is formulated as a tagging problem in (Yang and Xue, 2010), where each word in the sentence receives a tag indicating whether there is an EC before it. A maximum entropy model is utilized to predict the tags, but different types of ECs are not distinguished. In (Cai et al., 2011), a language-independent method was proposed to integrate the recovery of empty elements into syntactic parsing. 
As shown in the previous section, our model outperforms the model in (Yang and Xue, 2010) and (Cai et al., 2011) significantly using the same training and test data. (Luo and Zhao, 2011) also tries to predict the existence of an EC in Chinese sentences, but the ECs in the middle of a tree constituent are lumped into a single position and are not uniquely recoverable. There exists only a handful of previous work on applying ECs explicitly to machine translation so far. One of them is the work reported in (Chung and Gildea, 2010), where three approaches are compared, based on either pattern matching, CRF, or parsing. However, there is no comparison between using gold trees and automatic trees. There also exist a few major differences on the MT part between our work and theirs. First, in addition to the pre-processing of training data and inserting recovered empty categories, we implement sparse features to further boost the performance, and tune the feature weights directly towards maximizing the machine translation metric. Second, there is no discussion on the quality of word alignment in (Chung and Gildea, 2010), while we show the alignment improvement on a hand-aligned set. Last, they use a phase-based system trained on only 60K sentences, while we conduct experiments on more advanced Hiero and tree-to-string systems, trained on 2M sentences in a much larger corpus. We directly take advantage of the augmented parse trees in the tree-to-string grammar, which could have larger impact on the MT system performance. 6 Conclusions and Future Work In this paper, we presented a novel structured approach to EC prediction, which utilizes a maximum entropy model with various syntactic features and shows significantly higher accuracy than the state-of-the-art approaches. We also applied the predicted ECs to a large-scale Chinese-toEnglish machine translation task and achieved significant improvement over two strong MT base829 lines, i.e. a hierarchical phase-based system and a tree-to-string syntax-based system. More work remain to be done next to further take advantages of ECs. For example, the recovered ECs can be encoded in a forest as the input to the MT decoder and allow the decoder to pick the best MT output based on various features in addition to the sparse features we proposed in this work. Many promising approaches can be explored in the future. Acknowledgments We would like to acknowledge the support of DARPA under Grant HR0011-12-C-0015 for funding part of this work. The views, opinions, and/or findings contained in this article/presentation are those of the author/presenter and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense. References Ann Bies and Mohamed Maamouri. 2003. Penn Arabic treebank guidelines. In http://www.ircs.upenn.edu/arabic/Jan03release/ guidelines-TB-1-28-03.pdf. Shu Cai, David Chiang, and Yoav Goldberg. 2011. Language-independent parsing with empty elements. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics:short papers, pages 212–216. Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. In Technical Report TR-10-98, Computer Science Group, Harvard University. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. 
In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 263–270, Ann Arbor, Michigan, June. Tagyoung Chung and Daniel Gildea. 2010. Effects of empty categories on machine translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 636– 645. Michael Collins, Philipp Koehn, and Ivona Kuˇcerov´a. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 531–540. Peter Dienes and Amit Dubey. 2003. Deep syntactic processing by combining shallow methods. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Ryan Gabbard, Seth Kulick, and Mitchell Marcus. 2006. Fully parsing the penn treebank. In Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL. Yuqing Guo, Haifeng Wang, and Josef van Genabith. 2007. Recovering non-local dependencies for Chinese. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1352–1362. Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Philipp Koehn, Franz Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL, pages 48–54. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 609–616. Xiaoqiang Luo and Bing Zhao. 2011. A statistical tree annotator and its applications. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1230–1238. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. In Computational Linguistics, volume 19(2), pages 313–330. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 440–447, Hong Kong, China, October. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, PA. Adwait Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In Proceedings of Second Conference on Empirical 830 Methods in Natural Language Processing, pages 1– 10. Nianwen Xue, Fei Xia, Fu dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. In Natural Language Engineering, volume 11(2), pages 207– 238. Yaqin Yang and Nianwen Xue. 2010. Chasing the ghost: Recovering empty categories in the Chinese Treebank. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1382–1390, Beijing, China, August. Bing Zhao and Yaser Al-onaizan. 2008. 
Generalizing local and non-local word-reordering patterns for syntax-based machine translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 572–581. Shanheng Zhao and Hwee Tou Ng. 2007. Identification and resolution of Chinese zero pronouns: A machine learning approach. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). 831
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 832–840, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A Multi-Domain Translation Model Framework for Statistical Machine Translation Rico Sennrich Institute of Computational Linguistics University of Zurich Binzm¨uhlestr. 14 CH-8050 Z¨urich [email protected] Holger Schwenk and Walid Aransa LIUM, University of Le Mans 72085 Le Mans cedex 9, France [email protected] Abstract While domain adaptation techniques for SMT have proven to be effective at improving translation quality, their practicality for a multi-domain environment is often limited because of the computational and human costs of developing and maintaining multiple systems adapted to different domains. We present an architecture that delays the computation of translation model features until decoding, allowing for the application of mixture-modeling techniques at decoding time. We also describe a method for unsupervised adaptation with development and test data from multiple domains. Experimental results on two language pairs demonstrate the effectiveness of both our translation model architecture and automatic clustering, with gains of up to 1 BLEU over unadapted systems and single-domain adaptation. 1 Introduction The effectiveness of domain adaptation approaches such as mixture-modeling (Foster and Kuhn, 2007) has been established, and has led to research on a wide array of adaptation techniques in SMT, for instance (Matsoukas et al., 2009; Shah et al., 2012). In all these approaches, adaptation is performed during model training, with respect to a representative development corpus, and the models are kept unchanged when then system is deployed. Therefore, when working with multiple and/or unlabelled domains, domain adaptation is often impractical for a number of reasons. Firstly, maintaining multiple systems for each language pair, each adapted to a different domain, is costly in terms of computational and human resources: the full system development pipeline needs to be performed for all identified domains, all the models are separately stored and need to be switched at runtime. This is impractical in many real applications, in particular a web translation service which is faced with texts coming from many different domains. Secondly, domain adaptation bears a risk of performance loss. If there is a mismatch between the domain of the development set and the test set, domain adaptation can potentially harm performance compared to an unadapted baseline. We introduce a translation model architecture that delays the computation of features to the decoding phase. The calculation is based on a vector of component models, with each component providing the sufficient statistics necessary for the computation of the features. With this framework, adaptation to a new domain simply consists of updating a weight vector, and multiple domains can be supported by the same system. We also present a clustering approach for unsupervised adaptation in a multi-domain environment. In the development phase, a set of development data is clustered, and the models are adapted to each cluster. For each sentence that is being decoded, we choose the weight vector that is optimized on the closest cluster, allowing for adaptation even with unlabelled and heterogeneous test data. 
2 Related Work (Ortiz-Mart´ınez et al., 2010) delay the computation of translation model features for the purpose of interactive machine translation with online training. The main difference to our approach is that we store sufficient statistics not for a single model, but a vector of models, which allows us to 832 weight the contribution of each component model to the feature calculation. The similarity suggests that our framework could also be used for interactive learning, with the ability to learn a model incrementally from user feedback, and weight it differently than the static models, opening new research opportunities. (Sennrich, 2012b) perform instance weighting of translation models, based on the sufficient statistics. Our framework implements this idea, with the main difference that the actual combination is delayed until decoding, to support adaptation to multiple domains in a single system. (Razmara et al., 2012) describe an ensemble decoding framework which combines several translation models in the decoding step. Our work is similar to theirs in that the combination is done at runtime, but we also delay the computation of translation model probabilities, and thus have access to richer sufficient statistics. In principle, our architecture can support all mixture operations that (Razmara et al., 2012) describe, plus additional ones such as forms of instance weighting, which are not possible after the translation probabilities have been computed. (Banerjee et al., 2010) focus on the problem of domain identification in a multi-domain setting. They use separate translation systems for each domain, and a supervised setting, whereas we aim for a system that integrates support for multiple domains, with or without supervision. (Yamamoto and Sumita, 2007) propose unsupervised clustering at both training and decoding time. The training text is divided into a number of clusters, a model is trained on each, and during decoding, each sentence is assigned to the closest cluster-specific model. Our approach bears resemblance to this clustering, but is different in that Yamamoto and Sumita assign each sentence to the closest model, and use this model for decoding, whereas in our approach, each cluster is associated with a mixture of models that is optimized to the cluster, and the number of clusters need not be equal to the number of component models. 3 Translation Model Architecture This section covers the architecture of the multidomain translation model framework. Our translation model is embedded in a log-linear model as is common for SMT, and treated as a single translation model in this log-linear combination. We implemented this architecture for phrase-based models, and will use this terminology to describe it, but in principle, it can be extended to hierarchical or syntactic models. The architecture has two goals: move the calculation of translation model features to the decoding phase, and allow for multiple knowledge sources (e.g. bitexts or user-provided data) to contribute to their calculation. Our immediate purpose for this paper is domain adaptation in a multi-domain environment, but the delay of the feature computation has other potential applications, e.g. in interactive MT. We are concerned with calculating four features during decoding, henceforth just referred to as the translation model features: p(s|t), lex(s|t), p(t|s) and lex(t|s). s and t denote the source and target phrase. We follow the definitions in (Koehn et al., 2003). 
Traditionally, the phrase translation probabilities p(s|t) and p(t|s) are estimated through unsmoothed maximum likelihood estimation (MLE):

p(x|y) = \frac{c(x, y)}{c(y)} = \frac{c(x, y)}{\sum_{x'} c(x', y)}   (1)

where c denotes the count of an observation, and p the model probability. The lexical weights lex(s|t) and lex(t|s) are calculated as follows, using a set of word alignments a between s and t:[1]

lex(s|t, a) = \prod_{i=1}^{n} \frac{1}{|\{j \mid (i, j) \in a\}|} \sum_{\forall (i, j) \in a} w(s_i|t_j)   (2)

A special NULL token is added to t and aligned to each unaligned word in s. w(s_i|t_j) is calculated through MLE, as in equation 1, but based on the word (pair) frequencies. To combine statistics from a vector of n component corpora, we can use a weighted version of equation 1, which adds a weight vector λ of length n (Sennrich, 2012b):

p(x|y; \lambda) = \frac{\sum_{i=1}^{n} \lambda_i c_i(x, y)}{\sum_{i=1}^{n} \sum_{x'} \lambda_i c_i(x', y)}   (3)

The word translation probabilities w(t_i|s_j) are defined analogously, and used in equation 2 for a weighted version.

In order to compute the translation model features online, a number of sufficient statistics need to be accessible at decoding time. For p(s|t) and p(t|s), we require the statistics c(s), c(t) and c(s, t). For accessing them during decoding, we simply store them in the decoder's data structure, rather than storing pre-computed translation model features. This means that we can use existing, compact data formats for storing and accessing them.[2] The statistics are accessed when the decoder collects all translation options for a phrase s in the source sentence. We then access all translation options for each component table, obtaining a vector of statistics c(s) for the source phrase, and c(t) and c(s, t) for each potential target phrase. For phrase pairs which are not found, c(s, t) and c(t) are initially set to 0. Note that c(t) is potentially incorrect at this point, since a phrase pair not being found does not entail that c(t) is 0. After all tables have been accessed, and we thus know the full set of possible translation options (s, t), we perform a second round of lookups for all c(t) in the vector which are still set to 0. We introduce a second table for accessing c(t) efficiently, again storing it in the decoder's data structure. We can easily create such a table by inverting the source and target phrases, deduplicating it for compactness (we only need one entry per target phrase), and storing c(t) as the only feature.

For lex(s|t), we require an alignment a, plus c(t_j) and c(s_i, t_j) for all pairs (i, j) in a. lex(t|s) can be based on the same alignment a (with the exception of NULL alignments, which can be added online), but uses the statistics c(s_j) and c(t_i, s_j). For estimating the lexical probabilities, we load the frequencies into a vector of four hash tables.[3] Both the space and time complexity of the lookup are linear in the number of component tables. We deem this still practical because the collection of translation options is typically only a small fraction of total decoding time, with search making up the largest part. For storing and accessing the sufficient statistics (except for the word (pair) frequencies), we use an on-disk data structure provided by Moses, which reduces the memory requirements. Still, the number of components may need to be reduced, for instance through clustering of training data (Sennrich, 2012a).

With a small modification, our framework could be changed to use a single table that stores a vector of n statistics instead of a vector of n tables. While this would be more compact in terms of memory, and keep the number of table lookups independent of the number of components, we chose a vector of n tables for its flexibility. With a vector of tables, tables can be quickly added to or removed from the system (conceivably even at runtime), and can be polymorphic. One application where this could be desirable is interactive machine translation, where one could work with a mix of compact, static tables, and tables designed to be incrementally trainable.

In the unweighted variant, the resulting features are equivalent to training on the concatenation of all training data, excepting differences in word alignment, pruning[4] and rounding. The architecture can thus be used as a drop-in replacement for a baseline system that is trained on concatenated training data, with non-uniform weights only being used for texts for which better weights have been established. This can be done either using domain labels or unsupervised methods as described in the next section.

As a weighted combination method, we implemented instance weighting as described in equation 3. Table 1 shows the effect of weighting two corpora on the probability estimates for the translation of row. German Zeile (row in a table) is predominant in a bitext from the domain IT, whereas Reihe (line of objects) occurs more often in a legal corpus.

  phrase (pair)    c1(x)   c2(x)
  row               300      80
  (row, Zeile)      240      20
  (row, Reihe)       60      60

  λ          p(Zeile|row)   p(Reihe|row)
  (1, 1)         0.68           0.32
  (1, 10)        0.40           0.60
  (10, 1)        0.79           0.21

Table 1: Illustration of instance weighting with weight vectors for two corpora.

Note that the larger corpus (or more precisely, the one in which row occurs more often) has a stronger impact on the probability distribution with uniform weights (or in a concatenation of data sets). Instance weighting allows us to modify the contribution of each corpus. In our implementation, the weight vector is set globally, but can be overridden on a per-sentence basis. In principle, using different weight vectors for different phrase pairs in a sentence is conceivable. The framework can also be extended to support other combination methods, such as a linear interpolation of models.

Figure 1: Clustering of a data set which contains sentences from two domains: LEGAL and IT. Comparison between gold segmentation, and clustering with two alternative distance/similarity measures. Black: IT; grey: LEGAL. [Three scatter plots of per-sentence entropy with the KDE LM (IT) against entropy with the Acquis LM (LEGAL): gold clusters, clustering with Euclidean distance, and clustering with cosine similarity.]

[1] The equation shows lex(s|t); lex(t|s) is computed analogously.
[2] We have released an implementation of the architecture as part of the Moses decoder.
[3] c(s, t) and c(t, s) are not identical since the lexical probabilities are based on the unsymmetrized word alignment frequencies (in the Moses implementation which we reimplement).
[4] We prune the tables to the most frequent 50 phrase pairs per source phrase before combining them, since calculating the features for all phrase pairs of very common source phrases causes a significant slow-down. We found that this had no significant effects on BLEU.
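As a concrete check of equation 3, the following minimal sketch recomputes the probabilities in Table 1. The dictionary-based layout and the function name are our own illustration, assuming simple in-memory Python dictionaries of co-occurrence counts rather than the compact on-disk Moses tables described above.

```python
# Minimal sketch of instance-weighted phrase translation probabilities
# (equation 3), using the toy counts of Table 1.

def weighted_p(target, source, counts, weights):
    """p(target|source; lambda): weighted MLE over component corpora."""
    # numerator: sum_i lambda_i * c_i(source, target)
    num = sum(w * c.get((source, target), 0) for w, c in zip(weights, counts))
    # denominator: sum over all target phrases observed with this source phrase
    targets = {t for c in counts for (s, t) in c if s == source}
    den = sum(w * c.get((source, t), 0)
              for t in targets
              for w, c in zip(weights, counts))
    return num / den if den > 0 else 0.0

# c1: IT-domain corpus, c2: legal-domain corpus (counts from Table 1)
c1 = {("row", "Zeile"): 240, ("row", "Reihe"): 60}
c2 = {("row", "Zeile"): 20, ("row", "Reihe"): 60}
counts = [c1, c2]

for lam in [(1, 1), (1, 10), (10, 1)]:
    p_zeile = weighted_p("Zeile", "row", counts, lam)
    p_reihe = weighted_p("Reihe", "row", counts, lam)
    print(lam, round(p_zeile, 2), round(p_reihe, 2))
    # (1, 1)  -> 0.68 0.32
    # (1, 10) -> 0.40 0.60
    # (10, 1) -> 0.79 0.21
```

With uniform weights (1, 1) the estimates coincide with training on the concatenation of both corpora, which is why the framework can serve as a drop-in replacement for the unadapted baseline.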
4 Unsupervised Clustering for Online Translation Model Adaptation The framework supports decoding each sentence with a separate weight vector of size 4n, 4 being the number of translation model features whose computation can be weighted, and n the number of model components. We now address the question of how to automatically select good weights in a multi-domain task. As a way of optimizing instance weights, (Sennrich, 2012b) minimize translation model perplexity on a set of phrase pairs, automatically extracted from a parallel development set. We follow this technique, but want to have multiple weight vectors, adapted to different texts, between which the system switches at decoding time. The goal is to perform domain adaptation without requiring domain labels or user input, neither for development nor decoding. The basic idea consists of three steps: 1. Cluster a development set into k clusters. 2. Optimize translation model weights for each cluster. 3. For each sentence in the test set, assign it to the nearest cluster and use the translation model weights associated with the cluster. For step 2, we use the algorithm by (Sennrich, 2012b), implemented in the decoder to allow for a quick optimization of a running system. We will here discuss steps 1 and 3 in more detail. 4.1 Clustering the Development Set We use k-means clustering to cluster the sentences of the development set. We train a language model on the source language side of each of the n component bitexts, and compute an n-dimensional vector for each sentence by computing its entropy with each language model. Our aim is not to discriminate between sentences that are more likely and unlikely in general, but to cluster on the basis of relative differences between the language model entropies. For this purpose, we choose the cosine as our similarity measure. Figure 1 illustrates clustering in a two-dimensional vector space, and demonstrates that Euclidean distance is unsuitable because it may perform a clustering that is irrelevant to our purposes. As a result of development set clustering, we obtain a bitext for each cluster, which we use to optimize the model weights, and a centroid per cluster. At decoding time, we need only perform an assignment step. Each test set sentence is assigned to the centroid that is closest to it in the vector space. 4.2 Scalability Considerations Our theoretical expectation is that domain adaptation will fail to perform well if the test data is from 835 a different domain than the development data, or if the development data is a heterogeneous mix of domains. A multi-domain setup can mitigate this risk, but only if the relevant domain is represented in the development data, and if the development data is adequately segmented for the optimization. We thus suggest that the development data should contain enough data from all domains that one wants to adapt to, and a high number of clusters. While the resource requirements increase with the number of component models, increasing the number of clusters is computationally cheap at runtime. Only the clustering of the development set and optimization of the translation model weights for each clusters is affected by k. This means that the approach can in principle be scaled to a high number of clusters, and support a high number of domains.5 The biggest risk of increasing the number of clusters is that if the clusters become too small, perplexity minimization may overfit these small clusters. 
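The clustering and assignment steps described above can be sketched as follows. This is a minimal illustration, assuming each sentence has already been turned into an n-dimensional vector of entropies under the n source-side language models; the spherical k-means trick (length-normalizing the vectors so that nearest-centroid assignment amounts to cosine similarity), the toy data, and all function names are our own, not the authors' implementation.

```python
import numpy as np

def cosine_kmeans(vectors, k, iters=20, seed=0):
    """k-means on LM-entropy vectors using cosine similarity.
    Length-normalizing the vectors means clusters are formed from relative
    differences between the component LM entropies (cf. Figure 1)."""
    X = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmax(X @ centroids.T, axis=1)   # assignment step
        for j in range(k):                            # update step
            members = X[labels == j]
            if len(members):
                c = members.mean(axis=0)
                centroids[j] = c / np.linalg.norm(c)
    labels = np.argmax(X @ centroids.T, axis=1)       # final assignment
    return centroids, labels

def assign(sentence_vector, centroids):
    """At decoding time: pick the cluster whose centroid is most similar;
    the weight vector optimized on that cluster is then used for the
    translation model features of this sentence."""
    v = sentence_vector / np.linalg.norm(sentence_vector)
    return int(np.argmax(centroids @ v))

# toy usage: 2 component LMs (IT, LEGAL), 6 development sentences
dev = np.array([[0.8, 4.0], [1.0, 4.5], [0.9, 3.8],   # IT-like
                [4.2, 1.1], [3.9, 0.9], [4.5, 1.3]])  # LEGAL-like
centroids, labels = cosine_kmeans(dev, k=2)
print(labels, assign(np.array([1.1, 4.2]), centroids))
```

At decoding time only the assignment step is needed per sentence, so increasing the number of clusters leaves decoding cost essentially unchanged, in line with the scalability argument above.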
We will experiment with different numbers of clusters, but since we expect the optimal number of clusters to depend on the amount of development data, and the number of domains, we cannot make generalized statements about the ideal number of k. While it is not the focus of this paper, we also evaluate language model adaptation. We perform a linear interpolation of models for each cluster, with interpolation coefficients optimized using perplexity minimization on the development set. The cost of moving language model interpolation into the decoding phase is far greater than for translation models, since the number of hypotheses that need to be evaluated by the language model is several orders of magnitudes higher than the number of phrase pairs used during the translation. For the experiments with language model adaptation, we have chosen to perform linear interpolation offline, and perform language model switching during decoding. While model switching is a fast operation, it also makes the space complexity of storing the language models linear to the number of clusters. For scaling the approach to a high number of clusters, we envision that multi5If the development set is labelled, one can also use a gold segmentation of development sets instead of k-means clustering. At decoding time, cluster assignment can be performed by automatically assigning each sentence to the closest centroid, or again through gold labels, if available. data set sentences words (de) kde 216 000 1 990 000 kdedoc 2880 41 000 kdegb 51 300 450 000 oo 41 000 434 000 oo3 56 800 432 000 php 38 500 301 000 tm 146 000 2 740 000 acquis 2 660 000 58 900 000 dgt 372 000 8 770 000 ecb 110 000 2 850 000 ep7 1 920 000 50 500 000 nc7 159 000 3 950 000 total (train) 5 780 000 131 000 000 dev (IT) 3500 47 000 dev (LEGAL) 2000 46 800 test (IT) 5520 51 800 test (LEGAL) 9780 250 000 Table 2: Parallel data sets English–German. data set sentences words (en) eu 1 270 000 25 600 000 fiction 830 000 13 700 000 navajo 30 000 490 000 news 110 000 2 550 000 paraweb 370 000 3 930 000 subtitles 2 840 000 21 200 000 techdoc 970 000 7 270 000 total (train) 6 420 000 74 700 000 dev 3500 50 700 test 3500 49 600 Table 3: Parallel data sets Czech–English. pass decoding, with an unadapted language model in the first phase, and rescoring with a language model adapted online, could perform adequately, and keep the complexity independent of the number of clusters. 5 Evaluation 5.1 Data and Methods We conduct all experiments with Moses (Koehn et al., 2007), SRILM (Stolcke, 2002), and GIZA++ (Och and Ney, 2003). Log-linear weights are optimized using MERT (Och and Ney, 2003). We keep the word alignment and lexical reordering models constant through the experiments to minimize the number of confounding factors. We report translation quality using BLEU (Papineni et 836 system TM adaptation LM adaptation TM+LM adaptation IT LEGAL IT LEGAL IT LEGAL baseline 21.1 49.9 21.1 49.9 21.1 49.9 1 cluster (no split) 21.3* 49.9 21.8* 49.7 21.8* 49.8 2 clusters 21.6* 49.9 22.2* 50.4* 22.8* 50.2* 4 clusters 21.7* 49.9 23.1* 50.2* 22.6* 50.2* 8 clusters 22.1* 49.9 23.1* 50.1* 22.7* 50.3* 16 clusters 21.1 49.9 22.6* 50.3* 21.9* 50.1* gold clusters 21.8* 50.1* 22.4* 50.1* 23.2* 49.9 Table 4: Translation experiments EN–DE. BLEU scores reported. al., 2002). We account for optimizer instability by running 3 independent MERT runs per system, and performing significance testing with MultEval (Clark et al., 2011). 
Systems significantly better than the baseline with p < 0.01 are marked with (*). We conduct experiments on two data sets. The first is an English–German translation task with two domains, texts related to information technology (IT) and legal documents (LEGAL). We use data sets from both domains, plus out-of-domain corpora, as shown in table 2. 7 data sets come from the domain IT: 6 from OPUS (Tiedemann, 2009) and a translation memory (tm) provided by our industry partner. 3 data sets are from the legal domain: the ECB corpus from OPUS, plus the JRCAcquis (Steinberger et al., 2006) and DGT-TM (Steinberger et al., 2012). 2 data sets are out-ofdomain, made available by the 2012 Workshop on Statistical Machine Translation (Callison-Burch et al., 2012). The development sets are random samples from the respective in-domain bitexts (heldout from training). The test sets have been provided by Translated, our industry partner in the MATECAT project. Our second data set is CzEng 0.9, a Czech– English parallel corpus (Bojar and Zabokrtsk´y, 2009). It contains text from 7 different sources, on which we train separate component models. The size of the corpora is shown in table 3. As development and test sets, we use 500 sentences of held-out data per source. For both data sets, language models are trained on the target side of the bitexts. In all experiments, we keep the number of component models constant: 12 for EN–DE, 7 for CZ–EN. We vary the number of clusters k from 1, which corresponds to adapting the models to the full development set, to 16. The baseline is the concatenation of all trainData set λIT λLEGAL λcluster 1 λcluster 2 kde 1.0 1.0 1.0 1.0 kdedoc 0.64 12.0 86.0 6.4 kdegb 1.6 2.3 1.7 2.7 oo 0.76 1.6 0.73 1.7 oo3 1.8 4.7 2.4 2.7 php 0.79 6.3 0.69 3.5 tm 1.3 1.3 1.5 1.1 acquis 0.024 3.5 0.018 1.9 dgt 0.053 4.5 0.033 2.4 ecb 0.071 2.3 0.039 1.2 ep7 0.037 0.53 0.024 0.29 nc7 0.1 1.1 0.063 0.62 Table 5: Weight vectors for feature p(t|s) optimized on four development sets (from gold split and clustering with k = 2). ing data, with no adaptation performed. We also evaluate the labelled setting, where instead of unsupervised clustering, we use gold labels to split the development and test sets, and adapt the models to each labelled domain. 5.2 Results Table 4 shows results for the EN–DE data set. For our clustering experiments, the development set is the concatenation of the LEGAL and IT development sets. However, we always use the gold segmentation between LEGAL and IT for MERT and testing. This allows for a detailed analysis of the effect of development data clustering for the purpose of model adaptation. In an unlabelled setting, one would have to run MERT either on the full development set (as we will do for the CZ–EN task) or separately on each cluster, or use an alternative approach to optimize log-linear weights in a multidomain setting, such as feature augmentation as described by (Clark et al., 2012). 837 system TM adaptation LM adaptation TM+LM adaptation baseline 34.4 34.4 34.4 1 cluster (no split) 34.5 33.7 34.1 2 clusters 34.6 34.0 34.4 4 clusters 34.7* 34.3 34.6 8 clusters 34.7* 34.5 34.9* 16 clusters 34.7* 34.7* 35.0* gold clusters 35.0* 35.0* 35.4* Table 6: Translation experiments CZ–EN. BLEU scores reported. We find that an adaptation of the TM and LM to the full development set (system “1 cluster”) yields the smallest improvements over the unadapted baseline. The reason for this is that the mixed-domain development set is not representative for the respective test sets. 
Using multiple adapted systems yields better performance. For the IT test set, the system with gold labels and TM adaptation yields an improvement of 0.7 BLEU (21.1 →21.8), LM adaptation yields 1.3 BLEU (21.1 →22.4), and adapting both models outperforms the baseline by 2.1 BLEU (21.1 →23.2). The systems that use unsupervised clusters reach a similar level of performance than those with gold clusters, with best results being achieved by the systems with 2–8 clusters. Some systems outperform both the baseline and the gold clusters, e.g. TM adaptation with 8 clusters (21.1 →21.8 →22.1), or LM adaptation with 4 or 8 clusters (21.1 →22.4 →23.1). Results with 16 clusters are slightly worse than those with 2–8 clusters due to two effects. Firstly, for the system with adapted TM, one of the three MERT runs is an outlier, and the reported BLEU score of 21.1 is averaged from the three MERT runs achieving 22.1, 21.6, and 19.6 BLEU, respectively. Secondly, about one third of the IT test set is assigned to a cluster that is not IT-specific, which weakens the effect of domain adaptation for the systems with 16 clusters. For the LEGAL subset, gains are smaller. This can be explained by the fact that the majority of training data is already from the legal domain, which makes it unnecessary to boost its impact on the probability distribution even further. Table 5 shows the automatically obtained translation model weight vectors for two systems, “gold clusters” and “2 clusters”, for the feature p(t|s). It illustrates that all the corpora that we consider out-of-domain for IT are penalized by a factor of 10–50 (relative to the in-domain kde corpus) for the computation of this feature. For the LEGAL domain, the weights are more uniform, which is congruent with our observation that BLEU changes little. Table 6 shows results for the CZ–EN data set. For each system, MERT is performed on the full development set. As in the first task, adaptation to the full development set is least effective. The systems with unsupervised clusters significantly outperform the baseline. For the system with 16 clusters, we observe an improvement of 0.3 BLEU for TM adaptation, and 0.6 BLEU for adapting both models (34.4 →34.7 →35.0). The labelled system, i.e. the system with 7 clusters corresponding to the 7 data sources, both for the development and test set, performs best. We observe gains of 0.6 BLEU (34.4 →35.0) for TM or LM adaptation, and 1 BLEU (34.4 →35.4) when both models are adapted. We conclude that the translation model architecture is effective in a multi-domain setting, both with unsupervised clusters and labelled domains. The fact that language model adaptation yields an additional improvement in our experiments suggests that it it would be worthwhile to also investigate a language model data structure that efficiently supports multiple domains. 6 Conclusion We have presented a novel translation model architecture that delays the computation of translation model features to the decoding phase, and uses a vector of component models for this computation. We have also described a usage scenario for this architecture, namely its ability to quickly switch between weight vectors in order to serve as an adapted model for multiple domains. A simple, unsupervised clustering of development data is sufficient to make use of this ability and imple838 ment a multi-domain translation system. If available, one can also use the architecture in a labelled setting. 
Future work could involve merging our translation model framework with the online adaptation of other models, or the log-linear weights. Our approach is orthogonal to that of (Clark et al., 2012), who perform feature augmentation to obtain multiple sets of adapted log-linear weights. While (Clark et al., 2012) use labelled data, their approach could in principle also be applied after unsupervised clustering. The translation model framework could also serve as the basis of real-time adaptation of translation systems, e.g. by using incremental means to update the weight vector, or having an incrementally trainable component model that learns from the post-edits by the user, and is assigned a suitable weight. Acknowledgments This research was partially funded by the Swiss National Science Foundation under grant 105215 126999, the European Commission (MATECAT, ICT-2011.4.2 287688) and the DARPA BOLT project. References Pratyush Banerjee, Jinhua Du, Baoli Li, Sudip Kumar Naskar, Andy Way, and Josef Van Genabith. 2010. Combining multi-domain statistical machine translation models using automatic classifiers. In 9th Conference of the Association for Machine Translation in the Americas (AMTA 2010), Denver, Colorado, USA. Ondrej Bojar and Zdenek Zabokrtsk´y. 2009. Czeng 0.9: Large parallel treebank with rich annotation. Prague Bull. Math. Linguistics, 92:63–84. Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical machine translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 10–51, Montr´eal, Canada, June. Association for Computational Linguistics. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 176–181, Portland, Oregon, USA, June. Association for Computational Linguistics. Jonathan H. Clark, Alon Lavie, and Chris Dyer. 2012. One system, many domains: Open-domain statistical machine translation via feature augmentation. In Conference of the Association for Machine Translation in the Americas 2012 (AMTA 2012), San Diego, California, USA. George Foster and Roland Kuhn. 2007. Mixturemodel adaptation for SMT. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT ’07, pages 128–135, Prague, Czech Republic. Association for Computational Linguistics. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL ’03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 48–54, Edmonton, Canada. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic, June. Association for Computational Linguistics. Spyros Matsoukas, Antti-Veikko I. Rosti, and Bing Zhang. 2009. 
Discriminative corpus weight estimation for machine translation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2, pages 708–717, Singapore. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Daniel Ortiz-Mart´ınez, Ismael Garc´ıa-Varea, and Francisco Casacuberta. 2010. Online learning for interactive statistical machine translation. In HLTNAACL, pages 546–554. The Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In ACL ’02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Majid Razmara, George Foster, Baskaran Sankaran, and Anoop Sarkar. 2012. Mixing multiple translation models in statistical machine translation. In 839 Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Jeju, Republic of Korea. Association for Computational Linguistics. Rico Sennrich. 2012a. Mixture-modeling with unsupervised clusters for domain adaptation in statistical machine translation. In 16th Annual Conference of the European Association for Machine Translation (EAMT 2012), pages 185–192, Trento, Italy. Rico Sennrich. 2012b. Perplexity minimization for translation model domain adaptation in statistical machine translation. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 539– 549, Avignon, France. Association for Computational Linguistics. Kashif Shah, Loc Barrault, and Holger Schwenk. 2012. A general framework to weight heterogeneous parallel data for model adaptation in statistical machine translation. In Conference of the Association for Machine Translation in the Americas 2012 (AMTA 2012), San Diego, California, USA. Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Tomaz Erjavec, Dan Tufis, and Daniel Varga. 2006. The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC’2006), Genoa, Italy. Ralf Steinberger, Andreas Eisele, Szymon Klocek, Spyridon Pilos, and Patrick Schl¨uter. 2012. DGTTM: A freely available translation memory in 22 languages. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey. European Language Resources Association (ELRA). Andreas Stolcke. 2002. SRILM – An Extensible Language Modeling Toolkit. In Seventh International Conference on Spoken Language Processing, pages 901–904, Denver, CO, USA. J¨org Tiedemann. 2009. News from OPUS - a collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237–248. John Benjamins, Amsterdam/Philadelphia, Borovets, Bulgaria. Hirofumi Yamamoto and Eiichiro Sumita. 2007. Bilingual cluster based models for statistical machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 514–523, Prague, Czech Republic. 840
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 841–851, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Part-of-Speech Induction in Dependency Trees for Statistical Machine Translation Akihiro Tamura†,‡, Taro Watanabe†, Eiichiro Sumita†, Hiroya Takamura‡, Manabu Okumura‡ † National Institute of Information and Communications Technology {akihiro.tamura, taro.watanabe, eiichiro.sumita}@nict.go.jp † Precision and Intelligence Laboratory, Tokyo Institute of Technology {takamura, oku}@pi.titech.ac.jp Abstract This paper proposes a nonparametric Bayesian method for inducing Part-ofSpeech (POS) tags in dependency trees to improve the performance of statistical machine translation (SMT). In particular, we extend the monolingual infinite tree model (Finkel et al., 2007) to a bilingual scenario: each hidden state (POS tag) of a source-side dependency tree emits a source word together with its aligned target word, either jointly (joint model), or independently (independent model). Evaluations of Japanese-to-English translation on the NTCIR-9 data show that our induced Japanese POS tags for dependency trees improve the performance of a forestto-string SMT system. Our independent model gains over 1 point in BLEU by resolving the sparseness problem introduced in the joint model. 1 Introduction In recent years, syntax-based SMT has made promising progress by employing either dependency parsing (Lin, 2004; Ding and Palmer, 2005; Quirk et al., 2005; Shen et al., 2008; Mi and Liu, 2010) or constituency parsing (Huang et al., 2006; Liu et al., 2006; Galley et al., 2006; Mi and Huang, 2008; Zhang et al., 2008; Cohn and Blunsom, 2009; Liu et al., 2009; Mi and Liu, 2010; Zhang et al., 2011) on the source side, the target side, or both. However, dependency parsing, which is a popular choice for Japanese, can incorporate only shallow syntactic information, i.e., POS tags, compared with the richer syntactic phrasal categories in constituency parsing. Moreover, existing POS tagsets might not be optimal for SMT because they are constructed without considering the language in the other side. Consider the examples in Figure 1. The Japanese noun “利用” in 私 が 利用 利用 利用 利用料金 を 払う あなたはインターネットが 利用 利用 利用 利用できない You can not use the Internet . I pay usage fees . noun particle particle noun noun verb auxiliary verb noun particle noun noun verb particle [Example 1] [Example 2] Japanese POS: Japanese POS: Figure 1: Examples of Existing Japanese POS Tags and Dependency Structures Example 1 corresponds to the English verb “use”, while that in Example 2 corresponds to the English noun “usage”. Thus, Japanese nouns act like verbs in English in one situation, and nouns in English in another. If we could discriminate POS tags for two cases, we might improve the performance of a Japanese-to-English SMT system. In the face of the above situations, this paper proposes an unsupervised method for inducing POS tags for SMT, and aims to improve the performance of syntax-based SMT by utilizing the induced POS tagset. The proposed method is based on the infinite tree model proposed by Finkel et al. (2007), which is a nonparametric Bayesian method for inducing POS tags from syntactic dependency structures. In this model, hidden states represent POS tags, the observations they generate represent the words themselves, and tree structures represent syntactic dependencies between pairs of POS tags. 
The proposed method builds on this model by incorporating the aligned words in the other language into the observations. We investigate two types of models: (i) a joint model and (ii) an independent model. In the joint model, each hidden state jointly emits both a source word and its aligned target word as an observation. The independent model separately emits words in two languages from hidden states. By inferring POS 841 tags based on bilingual observations, both models can induce POS tags by incorporating information from the other language. Consider, for example, inducing a POS tag for the Japanese word “ 利用” in Figure 1. Under a monolingual induction method (e.g., the infinite tree model), the “利用” in Example 1 and 2 would both be assigned the same POS tag since they share the same observation. However, our models would assign separate tags for the two different instances since the “利 用” in Example 1 and Example 2 could be disambiguated by encoding the target-side information, either “use” or “usage”, in the observations. Inference is efficiently carried out by beam sampling (Gael et al., 2008), which combines slice sampling and dynamic programming. Experiments are carried out on the NTCIR-9 Japaneseto-English task using a binarized forest-to-string SMT system with dependency trees as its source side. Our bilingually-induced tagset significantly outperforms the original tagset and the monolingually-induced tagset. Further, our independent model achieves a more than 1 point gain in BLEU, which resolves the sparseness problem introduced by the bi-word observations. 2 Related Work A number of unsupervised methods have been proposed for inducing POS tags. Early methods have the problem that the number of possible POS tags must be provided preliminarily. This limitation has been overcome by automatically adjusting the number of possible POS tags using nonparametric Bayesian methods (Finkel et al., 2007; Gael et al., 2009; Blunsom and Cohn, 2011; Sirts and Alum¨ae, 2012). Gael et al. (2009) applied infinite HMM (iHMM) (Beal et al., 2001; Teh et al., 2006), a nonparametric version of HMM, to POS induction. Blunsom and Cohn (2011) used a hierarchical Pitman-Yor process prior to the transition and emission distribution for sophisticated smoothing. Sirts and Alum¨ae (2012) built a model that combines POS induction and morphological segmentation into a single learning problem. Finkel et al. (2007) proposed the infinite tree model, which represents recursive branching structures over infinite hidden states and induces POS tags from syntactic dependency structures. In the following, we overview the infinite tree model, which is the basis of our proposed model. In particular, we will describe the independent children H φk π π π πk ρ z1 z2 z3 x1 x2 x3 k=1,…,C H k k ~ ) ,..., ( Dirichlet ~ | φ ρ ρ ρ π Figure 2: A Graphical Representation of the Finite Tree Model model (Finkel et al., 2007), where children are dependent only on their parents, used in our proposed model1. 2.1 Finite Tree Model We first review the finite tree model, which can be graphically represented in Figure 2. Let Tt denote the tree whose root node is t. A node t has a hidden state zt (the POS tag) and an observation xt (the word). The probability of a tree Tt, pT (Tt), is recursively defined: pT (Tt) = p(xt|zt) ∏ t′∈c(t) p(zt′|zt)pT (Tt′), where c(t) is the set of the children of t. Let each hidden state variable have C possible values indexed by k. 
For each state k, there is a parameter ϕk which parameterizes the observation distribution for that state: xt|zt ∼F(ϕzt). ϕk is distributed according to a prior distribution H: ϕk ∼H. Transitions between states are governed by Markov dynamics parameterized by π, where πij = p(zc(t) = j|zt = i) and πk are the transition probabilities from the parent’s state k. πk is distributed according to a Dirichlet distribution with parameter ρ: πk|ρ ∼Dirichlet(ρ, . . . , ρ). The hidden state of each child zt′ is distributed according to a multinomial distribution πzt specific to the parent’s state zt: zt′|zt ∼Multinomial(πzt). 2.2 Infinite Tree Model In the infinite tree model, the number of possible hidden states is potentially infinite. The infinite model is formed by extending the finite tree model using a hierarchical Dirichlet process (HDP) (Teh et al., 2006). The reason for using an HDP rather 1Finkel et al. (2007) originally proposed three types of models: besides the independent children model, the simultaneous children model and the markov children model. Although we could apply the other two models, we leave this for future work. 842 H φk π π π πk α0 z1 z2 z3 x1 x2 x3 ∞ γ β β β β H k k ~ ) , ( DP ~ , | ) ( GEM ~ | 0 0 φ β α β α π γ γ β Figure 3: A Graphical Representation of the Infinite Tree Model than a simple Dirichlet process (DP)2 (Ferguson, 1973) is that we have to introduce coupling across transitions from different parent’s states. A similar measure was adopted in iHMM (Beal et al., 2001). HDP is a set of DPs coupled through a shared random base measure which is itself drawn from a DP: each Gk ∼DP(α0, G0) with a shared base measure G0, and G0 ∼DP(γ, H) with a global base measure H. From the viewpoint of the stickbreaking construction3 (Sethuraman, 1994), the HDP is interpreted as follows: G0 = ∞ ∑ k′=1 βk′δϕk′ and Gk = ∞ ∑ k′=1 πkk′δϕk′, where β ∼GEM(γ), πk ∼DP(α0, β), and ϕk′ ∼H. We regard each Gk as two coindexed distributions: πk, a distribution over the transition probabilities from the parent’s state k, and ϕk′, an observation distribution for the state k′. Then, the infinite tree model is formally defined as follows: β|γ ∼GEM(γ), πk|α0, β ∼DP(α0, β), ϕk ∼H, zt′|zt ∼Multinomial(πzt), xt|zt ∼F(ϕzt). Figure 3 shows the graphical representation of the infinite tree model. The primary difference be2DP is a measure on measures. It has two parameters, a scaling parameter α and a base measure H: DP(α, H). 3Sethuraman (1994) showed a definition of a measure G ∼DP(α0, G0). First, infinite sequences of i.i.d variables (π′ k)∞ k=1 and (ϕk)∞ k=1 are generated: π′ k|α0 ∼Beta(1, α0), ϕk ∼G0. Then, G is defined as: πk = π′ k ∑k−1 l=1 (1 −π′ l), G = ∑∞ k=1 πkδϕk. If π is defined by this process, then we write π ∼GEM(α0). H φk π π π πk α0 ∞ γ β β β β z1 z2 z3 z4 z5 z6 “払う +pay” “を” “料金 +fees” “利用 +usage” “私+I” “が” Figure 4: An Example of the Joint Model tween Figure 2 and Figure 3 is whether the number of copies of the state is finite or not. 3 Bilingual Infinite Tree Model We propose a bilingual variant of the infinite tree model, the bilingual infinite tree model, which utilizes information from the other language. Specifically, the proposed model introduces bilingual observations by embedding the aligned target words in the source-side dependency trees. This paper proposes two types of models that differ in their processes for generating observations: the joint model and the independent model. 
3.1 Joint Model The joint model is a simple application of the infinite tree model under a bilingual scenario. The model is formally defined in the same way as in Section 2.2 and is graphically represented similarly to Figure 3. The only difference from the infinite tree model is the instances of observations (xt). Observations in the joint model are the combination of source words and their aligned target words4, while observations in the monolingual infinite tree model represent only source words. For each source word, all the aligned target words are copied and sorted in alphabetical order, and then concatenated into a single observation. Therefore, a single target word may be emitted multiple times if the target word is aligned with multiple source words. Likewise, there may be target words which may not be emitted by our model, if the target words are not aligned. Figure 4 shows the process of generating Example 2 in Figure 1 through the joint model, where aligned words are jointly emitted as observations. In Figure 4, the POS tag of “利用” (z5) generates 4When no target words are aligned, we simply add a NULL target word. 843 H φk π π π πk α0 γ β β β β z1 z2 z3 z4 z5 H’ φ'k 払う pay I 私 を 利用 料金 が NONE NONE usage fees z6 ∞ ' ~ ' , ~ ) , ( DP ~ , | ) ( GEM ~ | 0 0 H H k k k φ φ β α β α π γ γ β Figure 5: A Graphical Representation of the Independent Model the string “利用+usage” as the observation (x5). Similarly, the POS tag of “利用” in Example 1 would generate the string “利用+use”. Hence, this model can assign different POS tags to the two different instances of the word “利用”, based on the different observation distributions in inference. 3.2 Independent Model The joint model is prone to a data sparseness problem, since each observation is a combination of a source word and its aligned target word. Thus, we propose an independent model, where each hidden state generates a source word and its aligned target word separately. For the aligned target side, we introduce an observation variable x′ t for each zt and a parameter ϕ′ k for each state k, which parameterizes a distinct distribution over the observations x′ t for that state. ϕ′ k is distributed according to a prior distribution H′. Specifically, the independent model is formally defined as follows: β|γ ∼GEM(γ), πk|α0, β ∼DP(α0, β), ϕk ∼H, ϕ′ k ∼H′, zt′|zt ∼Multinomial(πzt), xt|zt ∼F(ϕzt), x′ t|zt ∼F ′(ϕ′ zt). When multiple target words are aligned to a single source word, each aligned word is generated separately from observation distribution parameterized by ϕ′ k. Figure 5 graphs the process of generating Example 2 in Figure 1 using the independent model. x′ t and ϕ′ k are introduced for aligned target words. The state of “利用” (z5) generates the Japanese word “利用” as x5 and the English word “usage” as x′ 5. Due to this factorization, the independent model is less subject to the sparseness problem. 3.3 Introduction of Other Factors We assumed the surface form of aligned target words as additional observations in previous sections. Here, we introduce additional factors, i.e., the POS of aligned target words, in the observations. Note that POSs of target words are assigned by a POS tagger in the target language and are not inferred in the proposed model. First, we can simply replace surface forms of target words with their POSs to overcome the sparseness problem. Second, we can incorporate both information from the target language as observations. In the joint model, two pieces of information are concatenated into a single observation. 
In the independent model, we introduce observation variables (e.g., x′ t and x′′ t ) and parameters (e.g., ϕ′ k and ϕ′′ k) for each information. Specifically, x′ t and ϕ′ k are introduced for the surface form of aligned words, and x′′ t and ϕ′′ k for the POS of aligned words. Consider, for example, Example 1 in Figure 1. The POS tag of “利用” generates the string “利用+use+verb” as the observation in the joint model, while it generates “利用”, “use”, and “verb” independently in the independent model. 3.4 POS Refinement We have assumed a completely unsupervised way of inducing POS tags in dependency trees. Another realistic scenario is to refine the existing POS tags (Finkel et al., 2007; Liang et al., 2007) so that each refined sub-POS tag may reflect the information from the aligned words while preserving the handcrafted distinction from original POS tagset. Major difference is that we introduce separate transition probabilities πs k and observation distributions (ϕs k, ϕ ′s k ) for each existing POS tag s. Then, each node t is constrained to follow the distributions indicated by the initially assigned POS tag st, and we use the pair (st, zt) as a state representation. 3.5 Inference In inference, we find the state set that maximizes the posterior probability of state transitions given observations (i.e., P(z1:n|x1:n)). However, we cannot evaluate the probability for all possible states because the number of states is infinite. Finkel et al. (2007) presented a sampling algorithm for the infinite tree model, which is based on the Gibbs sampling in the direct assignment representation for iHMM (Teh et al., 2006). In the 844 Gibbs sampling, individual hidden state variables are resampled conditioned on all other variables. Unfortunately, its convergence is slow in HMM settings because sequential data is likely to have a strong correlation between hidden states (Gael et al., 2008). We present an inference procedure based on beam sampling (Gael et al., 2008) for the joint model and the independent model. Beam sampling limits the number of possible state transitions for each node to a finite number using slice sampling (Neal, 2003), and then efficiently samples whole hidden state transitions using dynamic programming. Beam sampling does not suffer from slow convergence as in Gibbs sampling by sampling the whole state variables at once. In addition, Gael et al. (2008) showed that beam sampling is more robust to initialization and hyperparameter choice than Gibbs sampling. Specifically, we introduce an auxiliary variable ut for each node in a dependency tree to limit the number of possible transitions. Our procedure alternates between sampling each of the following variables: the auxiliary variables u, the state assignments z, the transition probabilities π, the shared DP parameters β, and the hyperparameters α0 and γ. We can parallelize procedures in sampling u and z because the slice sampling for u and the dynamic programing for z are independent for each sentence. See Gael el al. (2009) for details. The only difference between inferences in the joint model and the independent model is in computing the posterior probability of state transitions given observations (e.g., p(z1:n|x1:n) and p(z1:n|x1:n, x′ 1:n)) in sampling z. In the following, we describe each sampling stage. See Teh et al., (2006) for details of sampling π, β, α0 and γ. Sampling u: Each ut is sampled from the uniform distribution on [0, πzd(t)zt], where d(t) is the parent of t: ut ∼Uniform(0, πzd(t)zt). 
Note that ut is a positive number, since each transition probability πzd(t)zt is larger than zero. Sampling z: Possible values k of zt are divided into the two sets using ut: a finite set with πzd(t)k > ut and an infinite set with πzd(t)k ≤ut. The beam sampling considers only the former set. Owing to the truncation of the latter set, we can compute the posterior probability of a state zt given observations for all t (t = 1, . . . , T) using dynamic programming as follows: In the joint model, p(zt|xσ(t), uσ(t)) ∝ p(xt|zt) · ∑ zd(t):πzd(t)zt>ut p(zd(t)|xσ(d(t)), uσ(d(t))), and in the independent model, p(zt|xσ(t), x′ σ(t), uσ(t)) ∝p(xt|zt) · p(x′ t|zt) · ∑ zd(t):πzd(t)zt>ut p(zd(t)|xσ(d(t)), x′ σ(d(t)), uσ(d(t))), where xσ(t) (or uσ(t)) denotes the set of xt (or ut) on the path from the root node to the node t in a tree. In our experiments, we assume that F(ϕk) is Multinomial(ϕk) and H is Dirichlet(ρ, . . . , ρ), which is the same in Finkel et al. (2007). Under this assumption, the posterior probability of an observation is as follows: p(xt|zt) = ˙nxtk + ρ ˙n·k + Nρ, where ˙nxk is the number of observations x with state k, ˙n·k is the number of hidden states whose values are k, and N is the total number of observations x. Similarly, p(x′ t|zt) = ˙nx′ tk + ρ′ ˙n·k + N′ρ′ , where N′ is the total number of observations x′. When the posterior probability of a state zt given observations for all t can be computed, we first sample the state of each leaf node and then perform backtrack sampling for every other zt where the zt is sampled given the sample for zc(t) as follows: p(zt|zc(t), x1:T , u1:T ) ∝ p(zt|xσ(t), uσ(t)) ∏ t′∈c(t) p(zt′|zt, ut′). Sampling π: We introduce a count variable nij ∈ n, which is the number of observations with state j whose parent’s state is i. Then, we sample π using the Dirichlet distribution: (πk1, . . . , πkK, ∑∞ k′=K+1 πkk′) ∼ Dirichlet(nk1 + α0β1, . . . , nkK + α0βK, α0 ∑∞ k′=K+1 βk′), where K is the number of distinct states in z. Sampling β: We introduce a set of auxiliary variables m, where mij ∈ m is the number of elements of πj corresponding to βi. The conditional distribution of each variable is p(mij = m|z, β, α0) ∝ S(nij, m)(α0βj)m, where S(n, m) are unsigned Stirling numbers of the first kind5. 5S(0, 0) = S(1, 1) = 1, S(n, 0) = 0 for n > 0, S(n, m) = 0 for m > n, and S(n + 1, m) = S(n, m − 1) + nS(n, m) for others. 845 The parameters β are sampled using the Dirichlet distribution: (β1, . . . , βK, ∑∞ k′=K+1 βk′) ∼ Dirichlet(m·1, . . . , m·K, γ), where m·k = ∑K k′=1 mk′k. Sampling α0: α0 is parameterized by a gamma hyperprior with hyperparameters αa and αb. We introduce two types of auxiliary variables for each state (k = 1, . . . , K), wk ∈[0, 1] and vk ∈{0, 1}. The conditional distribution of each wk is p(wk|α0) ∝wα0 k (1−wk)n·k−1 and that of each vk is p(vk|α0) ∝(n·k α0 ) vk, where n·k = ∑K k′=1 nk′k. The conditional distribution of α0 given wk and vk (k = 1, . . . , K) is p(α0|w, v) ∝ ααa−1+m..−∑K k=1 vk 0 e−α0(αb−∑K k=1 logwk), where m·· = ∑K k′=1 ∑K k′′=1 mk′k′′. Sampling γ: γ is parameterized by a gamma hyperprior with hyperparameters γa and γb. We introduce an auxiliary variable η, whose conditional distribution is p(η|γ) ∝ηγ(1 −η)m··−1. The conditional distribution of γ given η is p(γ|η) ∝ γγa−1+Ke−γ(γb−logη). 4 Experiment We tested our proposed models under the NTCIR-9 Japanese-to-English patent translation task (Goto et al., 2011), consisting of approximately 3.2 million bilingual sentences. 
Both the development data and the test data consist of 2,000 sentences. We also used the NTCIR-7 development data consisting of 2,741 sentences for development testing purposes. 4.1 Experimental Setup We evaluated our bilingual infinite tree model for POS induction using an in-house developed syntax-based forest-to-string SMT system. In the training process, the following steps are performed sequentially: preprocessing, inducing a POS tagset for a source language, training a POS tagger and a dependency parser, and training a forest-to-string MT model. Step 1. Preprocessing We used the first 10,000 Japanese-English sentence pairs in the NTCIR-9 training data for inducing a POS tagset for Japanese6. The Japanese sentences were segmented using MeCab7, and the English sentences were tokenized and POS tagged using TreeTagger (Schmid, 1994), where 43 and 58 types of POS tags are included in the Japanese sentences and the English sentences, respectively. The Japanese POS tags come from the secondlevel POS tags in the IPA POS tagset (Asahara and Matsumoto, 2003) and the English POS tags are derived from the Penn Treebank. Note that the Japanese POS tags are used for initialization of hidden states and the English POS tags are used as observations emitted by hidden states. Word-by-word alignments for the sentence pairs are produced by first running GIZA++ (Och and Ney, 2003) in both directions and then combining the alignments using the “grow-diag-finaland” heuristic (Koehn et al., 2003). Note that we ran GIZA++ on all of the NTCIR-9 training data in order to obtain better alignements. The Japanese sentences are parsed using CaboCha (Kudo and Matsumoto, 2002), which generates dependency structures using a phrasal unit called a bunsetsu8, rather than a word unit as in English or Chinese dependency parsing. Since we focus on the word-level POS induction, each bunsetsu-based dependency tree is converted into its corresponding word-based dependency tree using the following heuristic9: first, the last function word inside each bunsetsu is identified as the head word10; then, the remaining words are treated as dependents of the head word in the same bunsetsu; finally, a bunsetsu-based dependency structure is transformed to a word-based dependency structure by preserving the head/modifier relationships of the determined head words. Step 2. POS Induction A POS tag for each word in the Japanese sentences is inferred by our bilingual infinite tree model, ei6Due to the high computational cost, we did not use all the NTCIR-9 training data. We leave scaling up to a larger dataset for future work. 7http://mecab.googlecode.com/svn/ trunk/mecab/doc/index.html 8A bunsetsu is the smallest meaningful sequence consisting of a content word and accompanying function words (e.g., a noun and a particle). 9We could use other word-based dependency trees such as trees by the infinite PCFG model (Liang et al., 2007) and syntactic-head or semantic-head dependency trees in Nakazawa and Kurohashi (2012), although it is not our major focus. We leave this for future work. 10If no function words exist in a bunsetsu, the last content word is treated as the head word. 846 ther jointly (Joint) or independently (Ind). We also performed monolingual induction of Finkel et al. (2007) for comparison (Mono). In each model, a sequence of sampling u, z, π, β, α0, and γ is repeated 10,000 times. In sampling α0 and γ, hyperparameters αa, αb, γa, and γb are set to 2, 1, 1, and 1, respectively, which is the same setting in Gael et al. (2008). 
In sampling z, parameters ρ, ρ′, . . ., are set to 0.01. In the experiments, three types of factors for the aligned English words are compared: surface forms (‘s’), POS tags (‘P’), and the combination of both (‘s+P’). Further, two types of inference frameworks are compared: induction (IND) and refinement (REF). In both frameworks, each hidden state zt is first initialized to the POS tags assigned by MeCab (the IPA POS tagset), and then each state is updated through the inference procedure described in Section 3.5. Note that in REF, the sampling distribution over zt is constrained to include only states that are a refinement of the initially assigned POS tag. Step 3. Training a POS Tagger and a Dependency Parser In this step, we train a Japanese dependency parser from the 10,000 Japanese dependency trees with the induced POS tags which are derived from Step 2. We employed a transition-based dependency parser which can jointly learn POS tagging and dependency parsing (Hatori et al., 2011) under an incremental framework11. Note that the learned parser can identify dependencies between words and attach an induced POS tag for each word. Step 4. Training a Forest-to-String MT In this step, we train a forest-to-string MT model based on the learned dependency parser in Step 3. We use an in-house developed hypergraph-based toolkit, cicada, for training and decoding with a tree-to-string model, which has been successfully employed in our previous work for system combination (Watanabe and Sumita, 2011) and online learning (Watanabe, 2012). All the Japanese and English sentences in the NTCIR-9 training data are segmented in the same way as in Step 1, and then each Japanese sentence is parsed by the dependency parser learned in Step 3, which simultaneously assigns induced POS tags and word dependencies. Finally, a forest-to-string MT model is learned with Zhang et al., (2011), which extracts translation rules by a forest-based variant of 11http://triplet.cc/software/corbit/ IND REF BS 27.54 Mono 27.66 26.83 Joint[s] 28.00 28.00 Joint[P] 26.36 26.72 Joint[s+P] 27.99 27.82 Ind[s] 28.00 27.93 Ind[P] 28.11 28.63 Ind[s+P] 28.13 28.62 Table 1: Performance on Japanese-to-English Translation Measured by BLEU (%) the GHKM algorithm (Mi and Huang, 2008) after each parse tree is restructured into a binarized packed forest. Parameters are tuned on the development data using xBLEU (Rosti et al., 2011) as an objective and L-BFGS (Liu and Nocedal, 1989) as an optimization toolkit, since it is stable and less prone to randomness, unlike MERT (Och, 2003) or PRO (Hopkins and May, 2011). The development test data is used to set up hyperparameters, i.e., to terminate tuning iterations. When translating Japanese sentences, a parse tree for each sentence is constructed in the same way as described earlier in this step, and then the parse trees are translated into English sentences using the learned forest-to-string MT model. 4.2 Experimental Results Table 1 shows the performance for the test data measured by case sensitive BLEU (Papineni et al., 2002). We also present the performance of our baseline forest-to-string MT system (BS) using the original IPA POS tags. In Table 1, numbers in bold indicate that the systems outperform the baselines, BS and Mono. Under the Moses phrase-based SMT system (Koehn et al., 2007) with the default settings, we achieved a 26.80% BLEU score. Table 1 shows that the proposed systems outperform the baseline Mono. 
The differences between the performance of Ind[s+P] and Mono are statistically significant in the bootstrap method (Koehn, 2004), with a 1% significance level both in IND and REF. The results indicate that integrating the aligned target-side information in POS induction makes inferred tagsets more suitable for SMT. Table 1 also shows that the independent model is more effective for SMT than the joint model. This means that sparseness is a severe problem in 847 Model IND REF Joint[s+P] 164 620 Ind[s+P] 102 517 IPA POS tags 42 Table 2: The Number of POS Tags POS induction when jointly encoding bilingual information into observations. Additionally, all the systems using the independent model outperform BS. The improvements are statistically significant in the bootstrap method (Koehn, 2004), with a 1% significance level. The results show that the proposed models can generate more favorable POS tagsets for SMT than an existing POS tagset. In Table 1, REFs are at least comparable to, or better than, INDs except for Mono. This shows that REF achieves better performance by preserving the clues from the original POS tagset. However, REF may suffer sever overfitting problem for Mono since no bilingual information was incorporated. Further, when the full-level IPA POS tags12 were used in BS, the system achieved a 27.49% BLEU score, which is worse than the result using the second-level IPA POS tags. This means that manual refinement without bilingual information may also cause an overfitting problem in MT. 5 Discussion 5.1 Comparison to the IPA POS Tagset Table 2 shows the number of the IPA POS tags used in the experiments and the POS tags induced by the proposed models. This table shows that each induced tagset contains more POS tags than the IPA POS tagset. In the experimental data, some of Japanese verbs correspond to genuine English verbs, some are nominalized, and others correspond to English past participle verbs or present participle verbs which modify other words. Respective examples are “I use a card.”, “Using the index is faster.”, and “I explain using an example.”, where all the underlined words correspond to the same Japanese word, “用い”, whose IPA POS tag is a verb. Ind[s+P] in REF generated the POS tagset where the three types are assigned to separate POS groups. The Japanese particle “に” is sometimes attached to nouns to give them adverb roles. For 12377 types of full-level IPA POS tags were included in our experimental data. Tagging Dependency IND REF IND REF Original 90.37 93.62 Mono 90.75 88.04 91.77 91.51 Joint[s] 89.08 86.73 91.55 91.14 Joint[P] 80.54 79.98 91.06 91.29 Joint[s+P] 87.56 84.92 91.31 91.10 Ind[s] 87.62 84.33 92.06 92.58 Ind[P] 90.21 88.50 92.85 93.03 Ind[s+P] 89.57 86.12 92.96 92.78 Table 3: Tagging and Dependency Accuracy (%) example, “相互(mutual)  に” is translated as the adverb “mutually” in English. Other times, it is attached to words to make them the objects of verbs. For example, “彼(he)  に 与える (give)” is translated as “give him”. The POS tags by Ind[s+P] in REF discriminated the two types. These examples show that the proposed models can disambiguate POS tags that have different functions in English, whereas the IPA POS tagset treats them jointly. Thus, such discrimination improves the performance of a forest-to-string SMT. 5.2 Impact of Tagging and Dependency Accuracy The performance of our methods depends not only on the quality of the induced tag sets but also on the performance of the dependency parser learned in Step 3 of Section 4.1. 
We cannot directly evaluate the tagging accuracy of the parser trained through Step 3 because we do not have any data with induced POS tags other than the 10,000sentence data gained through Step 2. Thus we split the 10,000 data into the first 9,000 data for training and the remaining 1,000 for testing, and then a dependency parser was learned in the same way as in Step 3. Table 3 shows the results. Original is the performance of the parser learned from the training data with the original POS tagset. Note that the dependency accuracies are measured on the automatically parsed dependency trees, not on the syntactically correct gold standard trees. Thus Original achieved the best dependency accuracy. In Table 3, the performance for our bilinguallyinduced POSs, Joint and Ind, are lower than Original and Mono. It seems performing parsing and tagging with the bilingually-induced POS tagset is too difficult when only monolingual in848 formation is available to the parser. However, our bilingually-induced POSs, except for Joint[P], with the lower accuracies are more effective for SMT than the monolingually-induced POSs and the original POSs, as indicated in Table 1. The tagging accuracies for Joint[P] both in IND and REF are significantly lower than the others, while the dependency accuracies do not differ significantly. The lower tagging accuracies may directly reflect the lower translation qualities for Joint[P] in Table 1. 6 Conclusion We proposed a novel method for inducing POS tags for SMT. The proposed method is a nonparametric Bayesian method, which infers hidden states (i.e., POS tags) based on observations representing not only source words themselves but also aligned target words. Our experiments showed that a more favorable POS tagset can be induced by integrating aligned information, and furthermore, the POS tagset generated by the proposed method is more effective for SMT than an existing POS tagset (the IPA POS tagset). Even though we employed word alignment from GIZA++ with potential errors, large gains were achieved using our proposed method. We would like to investigate the influence of alignment errors in the future. In addition, we are planning to prove the effectiveness of our proposed method for language pairs other than Japanese-toEnglish. We are also planning to introduce our proposed method to other syntax-based SMT, such as a string-to-tree SMT and a tree-to-tree SMT. Acknowledgments We thank Isao Goto for helpful discussions and anonymous reviewers for valuable comments. We also thank Jun Hatori for helping us to apply his software, Corbit, to our induced POS tagsets. References Masayuki Asahara and Yuji Matsumoto. 2003. IPADIC User Manual. Technical report, Japan. Matthew J. Beal, Zoubin Ghahramani, and Carl E. Rasmussen. 2001. The Infinite Hidden Markov Model. In Advances in Neural Information Processing Systems, pages 577–584. Phil Blunsom and Trevor Cohn. 2011. A Hierarchical Pitman-Yor Process HMM for Unsupervised Part of Speech Induction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 865–874. Trevor Cohn and Phil Blunsom. 2009. A Bayesian Model of Syntax-Directed Tree to String Grammar Induction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 352–361. Yuan Ding and Martha Palmer. 2005. Machine Translation Using Probabilistic Synchronous Dependency Insertion Grammars. 
In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 541–548. Thomas S. Ferguson. 1973. A Bayesian Analysis of Some Nonparametric Problems. The Annals of Statistics, 1(2):209–230. Jenny Rose Finkel, Trond Grenager, and Christopher D. Manning. 2007. The Infinite Tree. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 272–279. Jurgen Van Gael, Yunus Saatci, Yee Whye Teh, and Zoubin Ghahramani. 2008. Beam Sampling for the Infinite Hidden Markov Model. In Proceedings of the 25th International Conference on Machine Learning, pages 1088–1095. Jurgen Van Gael, Andreas Vlachos, and Zoubin Ghahramani. 2009. The infinite HMM for unsupervised PoS tagging. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2, pages 678–687. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable Inference and Training of Context-Rich Syntactic Translation Models. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 961–968. Isao Goto, Bin Lu, Ka Po Chow, Eiichiro Sumita, and Benjamin K. Tsou. 2011. Overview of the Patent Machine Translation Task at the NTCIR-9 Workshop. In Proceedings of the 9th NTCIR Workshop, pages 559–578. Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2011. Incremental Joint POS Tagging and Dependency Parsing in Chinese. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1216–1224. Mark Hopkins and Jonathan May. 2011. Tuning as Ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1352–1362. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. A Syntax-Directed Translator with Extended Domain of Locality. In Proceedings of the Workshop on 849 Computationally Hard Problemsand Joint Inference in Speech and Language Processing, pages 1–8. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of the 2003 Human Language Technology Conference: North American Chapter of the Association for Computational Linguistics, pages 48–54. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constrantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics on Interactive Poster and Demonstration Sessions, pages 177–180. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395. Taku Kudo and Yuji Matsumoto. 2002. Japanese Dependency Analysis using Cascaded Chunking. In Proceedings of the 6th Conference on Natural Language Learning, pages 63–69. Percy Liang, Slav Petrov, Michael I. Jordan, and Dan Klein. 2007. The Infinite PCFG using Hierarchical Dirichlet Processes. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 688–697. Dekang Lin. 2004. A Path-based Transfer Model for Machine Translation. 
In Proceedings of the 20th International Conference on Computational Linguistics, pages 625–630. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming B, 45(3):503–528. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-toString Alignment Template for Statistical Machine Translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 609–616. Yang Liu, Yajuan L¨u, and Qun Liu. 2009. Improving Tree-to-Tree Translation with Packed Forests. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, pages 558–566. Haitao Mi and Liang Huang. 2008. Forest-based Translation Rule Extraction. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 206–214. Haitao Mi and Qun Liu. 2010. Constituency to Dependency Translation with Forests. In Proceedings of the 48th Annual Conference of the Association for Computational Linguistics, pages 1433–1442. Toshiaki Nakazawa and Sadao Kurohashi. 2012. Alignment by Bilingual Generation and Monolingual Derivation. In Proceedings of the 24th International Conference on Computational Linguistics, pages 1963–1978. Radford M. Neal. 2003. Slice Sampling. Annals of Statistics, 31:705–767. Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29:19–51. Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency Treelet Translation: Syntactically Informed Phrasal SMT. In Proceedings of the 43rd Annual Conference of the Association for Computational Linguistics, pages 271–279. Antti-Veikko Rosti, Bing Zhang, Spyros Matsoukas, and Richard Schwartz. 2011. Expected BLEU Training for Graphs: BBN System Description for WMT11 System Combination Task. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 159–165. Helmut Schmid. 1994. Probabilistic Part-of-Speech Tagging Using Decision Trees. In Proceedings of the International Conference on New Methods in Language Processing, pages 44–49. Jayaram Sethuraman. 1994. A Constructive Definition of Dirichlet Priors. Statistica Sinica, 4(2):639–650. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A New String-to-Dependency Machine Translation Algorithm with a Target Dependency Language Model. In Proceedings of the 46th Annual Conference of the Association for Computational Linguistics: Human Language Technologies, pages 577– 585. Kairit Sirts and Tanel Alum¨ae. 2012. A Hierarchical Dirichlet Process Model for Joint Part-of-Speech and Morphology Induction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 407–416. 850 Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet Processes. 
Journal of the American Statistical Association, 101(476):1566–1581. Taro Watanabe and Eiichiro Sumita. 2011. Machine Translation System Combination by Confusion Forest. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1249–1257. Taro Watanabe. 2012. Optimized Online Rank Learning for Machine Translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 253–262. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan, and Sheng Li. 2008. A Tree Sequence Alignment-based Tree-to-Tree Translation Model. In Proceedings of the 46th Annual Conference of the Association for Computational Linguistics: Human Language Technologies, pages 559– 567. Hao Zhang, Licheng Fang, Peng Xu, and Xiaoyun Wu. 2011. Binarized Forest to String Translation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 19–24. 851
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 852–861, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Statistical Machine Translation Improves Question Retrieval in Community Question Answering via Matrix Factorization Guangyou Zhou, Fang Liu, Yang Liu, Shizhu He, and Jun Zhao National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences 95 Zhongguancun East Road, Beijing 100190, China {gyzhou,fliu,liuyang09,shizhu.he,jzhao}@nlpr.ia.ac.cn Abstract Community question answering (CQA) has become an increasingly popular research topic. In this paper, we focus on the problem of question retrieval. Question retrieval in CQA can automatically find the most relevant and recent questions that have been solved by other users. However, the word ambiguity and word mismatch problems bring about new challenges for question retrieval in CQA. State-of-the-art approaches address these issues by implicitly expanding the queried questions with additional words or phrases using monolingual translation models. While useful, the effectiveness of these models is highly dependent on the availability of quality parallel monolingual corpora (e.g., question-answer pairs) in the absence of which they are troubled by noise issue. In this work, we propose an alternative way to address the word ambiguity and word mismatch problems by taking advantage of potentially rich semantic information drawn from other languages. Our proposed method employs statistical machine translation to improve question retrieval and enriches the question representation with the translated words from other languages via matrix factorization. Experiments conducted on a real CQA data show that our proposed approach is promising. 1 Introduction With the development of Web 2.0, community question answering (CQA) services like Yahoo! Answers,1 Baidu Zhidao2 and WkiAnswers3 have attracted great attention from both academia and industry (Jeon et al., 2005; Xue et al., 2008; Adamic et al., 2008; Wang et al., 2009; Cao et al., 2010). In CQA, anyone can ask and answer questions on any topic, and people seeking information are connected to those who know the answers. As answers are usually explicitly provided by human, they can be helpful in answering real world questions. In this paper, we focus on the task of question retrieval. Question retrieval in CQA can automatically find the most relevant and recent questions (historical questions) that have been solved by other users, and then the best answers of these historical questions will be used to answer the users’ queried questions. However, question retrieval is challenging partly due to the word ambiguity and word mismatch between the queried questions and the historical questions in the archives. Word ambiguity often causes the retrieval models to retrieve many historical questions that do not match the users’ intent. This problem is also amplified by the high diversity of questions and users. For example, depending on different users, the word “interest” may refer to “curiosity”, or “a charge for borrowing money”. Another challenge is word mismatch between the queried questions and the historical questions. The queried questions may contain words that are different from, but related to, the words in the relevant historical questions. 
For example, if a queried question contains the word “company” but a relevant historical question instead contains the word “firm”, then there is a mismatch and the historical 1http://answers.yahoo.com/ 2http://zhidao.baidu.com/ 3http://wiki.answers.com/ 852 English Chinese word ambiguity How do I get a loan 我(wǒ) 如何(rúhé) 从(cóng) from a bank? 银 银 银行 行 行(yííínháááng) 贷款(dàikuǎn) ? How to reach the 如何(rúhé) 前往(qiánwǎng) bank of the river? 河 河 河岸 岸 岸(héééàààn) ? word mismatch company 公司(gōngsī) firm 公司(gōngsī) rheum 感冒(gǎnmào) catarrh 感冒(gǎnmào) Table 1: Google translate: some illustrative examples. question may not be easily distinguished from an irrelevant one. Researchers have proposed the use of wordbased translation models (Berger et al., 2000; Jeon et al., 2005; Xue et al., 2008; Lee et al., 2008; Bernhard and Gurevych, 2009) to solve the word mismatch problem. As a principle approach to capture semantic word relations, wordbased translation models are built by using the IBM model 1 (Brown et al., 1993) and have been shown to outperform traditional models (e.g., VSM, BM25, LM) for question retrieval. Besides, Riezler et al. (2007) and Zhou et al. (2011) proposed the phrase-based translation models for question and answer retrieval. The basic idea is to capture the contextual information in modeling the translation of phrases as a whole, thus the word ambiguity problem is somewhat alleviated. However, all these existing studies in the literature are basically monolingual approaches which are restricted to the use of original language of questions. While useful, the effectiveness of these models is highly dependent on the availability of quality parallel monolingual corpora (e.g., question-answer pairs) in the absence of which they are troubled by noise issue. In this work, we propose an alternative way to address the word ambiguity and word mismatch problems by taking advantage of potentially rich semantic information drawn from other languages. Through other languages, various ways of adding semantic information to a question could be available, thereby leading to potentially more improvements than using the original language only. Taking a step toward using other languages, we propose the use of translated representation by alternatively enriching the original questions with the words from other languages. The idea of improving question retrieval with statistical machine translation is based on the following two observations: (1) Contextual information is exploited during the translation from one language to another. For example in Table 1, English words “interest” and “bank” that have multiple meanings under different contexts are correctly addressed by using the state-of-the-art translation tool −−Google Translate.4 Thus, word ambiguity based on contextual information is naturally involved when questions are translated. (2) Multiple words that have similar meanings in one language may be translated into an unique word or a few words in a foreign language. For example in Table 1, English words such as “company” and “firm” are translated into “公司(gōngsī)”, “rheum” and “catarrh” are translated into “感冒(gǎnmào)” in Chinese. Thus, word mismatch problem can be somewhat alleviated by using other languages. Although Zhou et al. (2012) exploited bilingual translation for question retrieval and obtained the better performance than traditional monolingual translation models. 
However, there are two problems with this enrichment: (1) enriching the original questions with the translated words from other languages increases the dimensionality and makes the question representation even more sparse; (2) statistical machine translation may introduce noise, which can harm the performance of question retrieval. To solve these two problems, we propose to leverage statistical machine translation to improve question retrieval via matrix factorization. The remainder of this paper is organized as follows. Section 2 describes the proposed method by leveraging statistical machine translation to improve question retrieval via matrix factorization. Section 3 presents the experimental results. In section 4, we conclude with ideas for future research. 4http://translate.google.com/translate t 853 2 Our Approach 2.1 Problem Statement This paper aims to leverage statistical machine translation to enrich the question representation. In order to address the word ambiguity and word mismatch problems, we expand a question by adding its translation counterparts. Statistical machine translation (e.g., Google Translate) can utilize contextual information during the question translation, so it can solve the word ambiguity and word mismatch problems to some extent. Let L = {l1, l2, . . . , lP } denote the language set, where P is the number of languages considered in the paper, l1 denotes the original language (e.g., English) while l2 to lP are the foreign languages. Let D1 = {d(1) 1 , d(1) 2 , . . . , d(1) N } be the set of historical question collection in original language, where N is the number of historical questions in D1 with vocabulary size M1. Now we first translate each original historical question from language l1 into other languages lp (p ∈ [2, P]) by Google Translate. Thus, we can obtain D2, . . . , DP in different languages, and Mp is the vocabulary size of Dp. A question d(p) i in Dp is simply represented as a Mp dimensional vector d(p) i , in which each entry is calculated by tf-idf. The N historical questions in Dp are then represented in a Mp × N term-question matrix Dp = {d(p) 1 , d(p) 2 , . . . , d(p) N }, in which each row corresponds to a term and each column corresponds to a question. Intuitively, we can enrich the original question representation by adding the translated words from language l2 to lP , the original vocabulary size is increased from M1 to ∑P p=1 Mp. Thus, the term-question matrix becomes D = {D1, D2, . . . , DP } and D ∈ R(∑P p=1 Mp)×N. However, there are two problems with this enrichment: (1) enriching the original questions with the translated words from other languages makes the question representation even more sparse; (2) statistical machine translation may introduce noise.5 To solve these two problems, we propose to leverage statistical machine translation to improve question retrieval via matrix factorization. Figure 1 presents the framework of our proposed method, where qi represents a queried question, and qi is a vector representation of qi. 5Statistical machine translation quality is far from satisfactory in real applications. …… …… …… …… Historical Question Collection Representation Query Representation Figure 1: Framework of our proposed approach for question retrieval. 2.2 Model Formulation To tackle the data sparseness of question representation with the translated words, we hope to find two or more lower dimensional matrices whose product provides a good approximate to the original one via matrix factorization. 
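Before the factorization itself, the construction of the per-language term-question matrices D_p from Section 2.1 can be sketched as follows. The sketch assumes scikit-learn with its default tokenizer, so segmentation of the Chinese translations and the paper's exact tf-idf weighting are glossed over; it only illustrates the data layout used in the rest of this section.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def build_term_question_matrices(questions_by_language):
    """Build one tf-idf term-question matrix D_p per language.

    questions_by_language : list of P lists; entry p holds the N historical
    questions rendered in language l_p (entry 0 is the original English,
    the others are the machine-translated copies, in the same order).
    Returns a list of sparse (M_p x N) matrices with rows as terms and
    columns as questions, matching the orientation of D_p above.
    """
    matrices = []
    for questions in questions_by_language:
        vectorizer = TfidfVectorizer()
        # fit_transform yields questions x terms; transpose to terms x questions.
        matrices.append(vectorizer.fit_transform(questions).T)
    return matrices
```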
Previous studies have shown that there is psychological and physiological evidence for parts-based representation in the human brain (Wachsmuth et al., 1994). The non-negative matrix factorization (NMF) is proposed to learn the parts of objects like text documents (Lee and Seung, 2001). NMF aims to find two non-negative matrices whose product provides a good approximation to the original matrix and has been shown to be superior to SVD in document clustering (Xu et al., 2003; Tang et al., 2012). In this paper, NMF is used to induce the reduced representation Vp of Dp, Dp is independent on {D1, D2, . . . , Dp−1, Dp+1, . . . , DP }. When ignoring the coupling between Vp, it can be solved by minimizing the objective function as follows: O1(Up, Vp) = min Up≥0,Vp≥0 ∥Dp −UpVp∥2 F (1) where ∥· ∥F denotes Frobenius norm of a matrix. Matrices Up ∈RMp×K and Vp ∈RK×N are the reduced representation for terms and questions in the K dimensional space, respectively. To reduce the noise introduced by statistical machine translation, we assume that Vp from language Dp (p ∈[2, P]) should be close to V1 854 from the original language D1. Based on this assumption, we minimize the distance between Vp (p ∈[2, P]) and V1 as follows: O2(Vp) = min Vp≥0 P ∑ p=2 ∥Vp −V1∥2 F (2) Combining equations (1) and (2), we get the following objective function: O(U1, . . . , UP ; V1, . . . , VP ) (3) = P ∑ p=1 ∥Dp −UpVp∥2 F + P ∑ p=2 λp∥Vp −V1∥2 F where parameter λp (p ∈[2, P]) is used to adjust the relative importance of these two components. If we set a small value for λp, the objective function behaves like the traditional NMF and the importance of data sparseness is emphasized; while a big value of λp indicates Vp should be very closed to V1, and equation (3) aims to remove the noise introduced by statistical machine translation. By solving the optimization problem in equation (4), we can get the reduced representation of terms and questions. min O(U1, . . . , UP ; V1, . . . , VP ) (4) subject to : Up ≥0, Vp ≥0, p ∈[1, P] 2.3 Optimization The objective function O defined in equation (4) performs data sparseness and noise removing simultaneously. There are 2P coupling components in O, and O is not convex in both U and V together. Therefore it is unrealistic to expect an algorithm to find the global minima. In the following, we introduce an iterative algorithm which can achieve local minima. In our optimization framework, we optimize the objective function in equation (4) by alternatively minimizing each component when the remaining 2P −1 components are fixed. This procedure is summarized in Algorithm 1. 2.3.1 Update of Matrix Up Holding V1, . . . , VP and U1, . . . , Up−1, Up+1, . . . , UP fixed, the update of Up amounts to the following optimization problem: min Up≥0 ∥Dp −UpVp∥2 F (5) Algorithm 1 Optimization framework Input: Dp ∈Rmp×N, p ∈[1, P] 1: for p = 1 : P do 2: V(0) p ∈RK×N ←random matrix 3: for t = 1 : T do  T is iteration times 4: U(t) p ←UpdateU(Dp, V(t−1) p ) 5: V(t) p ←UpdateV(Dp, U(t) p ) 6: end for 7: return U(T) p , V(T) p 8: end for Algorithm 2 Update Up Input: Dp ∈RMp×N, Vp ∈RK×N 1: for i = 1 : Mp do 2: ¯u(p)∗ i = (VpVT p )−1Vp¯d(p) i 3: end for 4: return Up Let ¯d(p) i = (d(p) i1 , . . . , d(p) iK)T and ¯u(p) i = (u(p) i1 , . . . , u(p) iK)T be the column vectors whose entries are those of the ith row of Dp and Up respectively. 
Thus, the optimization of equation (5) can be decomposed into Mp optimization problems that can be solved independently, with each corresponding to one row of Up: min ¯u(p) i ≥0 ∥¯d(p) i −VT p ¯u(p) i ∥2 2 (6) for i = 1, . . . , Mp. Equation (6) is a standard least squares problems in statistics and the solution is: ¯u(p)∗ i = (VpVT p )−1Vp¯d(p) i (7) Algorithm 2 shows the procedure. 2.3.2 Update of Matrix Vp Holding U1, . . . , UP and V1, . . . , Vp−1, Vp+1, . . . , VP fixed, the update of Vp amounts to the optimization problem divided into two categories. if p ∈[2, P], the objective function can be written as: min Vp≥0 ∥Dp −UpVp∥2 F + λp∥Vp −V1∥2 F (8) if p = 1, the objective function can be written as: min Vp≥0 ∥Dp −UpVp∥2 F + λp∥Vp∥2 F (9) 855 Let d(p) j be the jth column vector of Dp, and v(p) j be the jth column vector of Vp, respectively. Thus, equation (8) can be rewritten as: min {v(p) j ≥0} N ∑ j=1 ∥d(p) j −Upv(p) j ∥2 2+ N ∑ j=1 λp∥v(p) j −v(1) j ∥2 2 (10) which can be decomposed into N optimization problems that can be solved independently, with each corresponding to one column of Vp: min v(p) j ≥0 ∥d(p) j −Upv(p) j ∥2 2+λp∥v(p) j −v(1) j ∥2 2 (11) for j = 1, . . . , N. Equation (12) is a least square problem with L2 norm regularization. Now we rewrite the objective function in equation (12) as L(v(p) j ) = ∥d(p) j −Upv(p) j ∥2 2 + λp∥vp j −v(1) j ∥2 2 (12) where L(v(1) j ) is convex, and hence has a unique solution. Taking derivatives, we obtain: ∂L(v(p) j ) ∂v(p) j = −2UT p (d(p) j −Upv(p) j )+2λp(v(p) j −v(1) j ) (13) Forcing the partial derivative to be zero leads to v(p)∗ j = (UT p Up + λpI)−1(UT p d(p) j + λpv(1) j ) (14) where p ∈[2, P] denotes the foreign language representation. Similarly, the solution of equation (9) is: v(p)∗ j = (UT p Up + λpI)−1UT p d(p) j (15) where p = 1 denotes the original language representation. Algorithm 3 shows the procedure. 2.4 Time Complexity Analysis In this subsection, we discuss the time complexity of our proposed method. The optimization ¯u(p) i using Algorithm 2 should calculate VpVT p and Vp¯d(p) i , which takes O(NK2 + NK) operations. Therefore, the optimization Up takes O(NK2 + MpNK) operations. Similarly, the time complexity of optimization Vi using Algorithm 3 is O(MpK2 + MpNK). Another time complexity is the iteration times T used in Algorithm 1 and the total number of Algorithm 3 Update Vp Input: Dp ∈RMp×N, Up ∈RMp×K 1: Σ ←(UT p Up + λpI)−1 2: Φ ←UT p Dp 3: if p = 1 then 4: for j = 1 : N do 5: v(p) j ←Σϕj, ϕj is the jth column of Φ 6: end for 7: end if 8: return V1 9: if p ∈[2, P] then 10: for j = 1 : N do 11: v(p) j ←Σ(ϕj + λpv(1) j ) 12: end for 13: end if 14: return Vp languages P, the overall time complexity of our proposed method is: P ∑ p=1 T × O(NK2 + MpK2 + 2MpNK) (16) For each language Dp, the size of vocabulary Mp is almost constant as the number of questions increases. Besides, K ≪min(Mp, N), theoretically, the computational time is almost linear with the number of questions N and the number of languages P considered in the paper. Thus, the proposed method can be easily adapted to the largescale information retrieval task. 2.5 Relevance Ranking The advantage of incorporating statistical machine translation in relevance ranking is to reduce “word ambiguity” and “word mismatch” problems. To do so, given a queried question q and a historical question d from Yahoo! Answers, we first translate q and d into other foreign languages (e.g., Chinese, French etc.) 
and get the corresponding translated representation qi and di (i ∈[2, P]), where P is the number of languages considered in the paper. For queried question q = q1, we represent it in the reduced space: vq1 = arg min v≥0 ∥q1 −U1v∥2 2 + λ1∥v∥2 2 (17) where vector q1 is the tf-idf representation of queried question q1 in the term space. Similarly, for historical question d = d1 (and its tf-idf representation d1 in the term space) we represent it in the reduced space as vd1. 856 The relevance score between the queried question q1 and the historical question d1 in the reduced space is, then, calculated as the cosine similarity between vq1 and vd1: s(q1, d1) = < vq1, vd1 > ∥vq1∥2 · ∥vd1∥2 (18) For translated representation qi (i ∈[2, P]), we also represent it in the reduced space: vqi = arg min v≥0 ∥qi−Uiv∥2 2+λi∥v−vq1∥2 2 (19) where vector qi is the tf-idf representation of qi in the term space. Similarly, for translated representation di (and its tf-idf representation di in the term space) we also represent it in the reduced space as vdi. The relevance score s(qi, di) between qi and di in the reduced space can be calculated as the cosine similarity between vqi and vdi. Finally, we consider learning a relevance function of the following general, linear form: Score(q, d) = θT · Φ(q, d) (20) where feature vector Φ(q, d) = (sV SM(q, d), s(q1, d1), s(q2, d2), . . . , s(qP , dP )), and θ is the corresponding weight vector, we optimize this parameter for our evaluation metrics directly using the Powell Search algorithm (Paul et al., 1992) via cross-validation. sV SM(q, d) is the relevance score in the term space and can be calculated using Vector Space Model (VSM). 3 Experiments 3.1 Data Set and Evaluation Metrics We collect the data set from Yahoo! Answers and use the getByCategory function provided in Yahoo! Answers API6 to obtain CQA threads from the Yahoo! site. More specifically, we utilize the resolved questions and the resulting question repository that we use for question retrieval contains 2,288,607 questions. Each resolved question consists of four parts: “question title”, “question description”, “question answers” and “question category”. For question retrieval, we only use the “question title” part. It is assumed that question title already provides enough semantic information for understanding the users’ information needs (Duan et al., 2008). There are 26 categories 6http://developer.yahoo.com/answers Category #Size Category # Size Arts & Humanities 86,744 Home & Garden 35,029 Business & Finance 105,453 Beauty & Style 37,350 Cars & Transportation 145,515 Pet 54,158 Education & Reference 80,782 Travel 305,283 Entertainment & Music 152,769 Health 132,716 Family & Relationships 34,743 Sports 214,317 Politics & Government 59,787 Social Science 46,415 Pregnancy & Parenting 43,103 Ding out 46,933 Science & Mathematics 89,856 Food & Drink 45,055 Computers & Internet 90,546 News & Events 20,300 Games & Recreation 53,458 Environment 21,276 Consumer Electronics 90,553 Local Businesses 51,551 Society & Culture 94,470 Yahoo! Products 150,445 Table 2: Number of questions in each first-level category. at the first level and 1,262 categories at the leaf level. Each question belongs to a unique leaf category. Table 2 shows the distribution across firstlevel categories of the questions in the archives. We use the same test set in previous work (Cao et al., 2009; Cao et al., 2010). 
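Collecting the closed forms of Sections 2.2 to 2.5 in one place, the sketch below evaluates the objective of equation (3), performs the alternating updates of equations (7), (14) and (15), and computes the reduced-space projections and final score of equations (17) to (20). It assumes dense NumPy arrays and, like the closed forms stated in Algorithms 2 and 3, does not explicitly enforce the nonnegativity constraints; all names are ours, not the authors' code.

```python
import numpy as np

def objective(D, U, V, lam):
    """Evaluate equation (3).  D, U, V are lists indexed by language
    (index 0 is the original language); lam[p-1] is lambda_p for p >= 1."""
    value = sum(np.linalg.norm(D[p] - U[p] @ V[p], 'fro') ** 2
                for p in range(len(D)))
    value += sum(lam[p - 1] * np.linalg.norm(V[p] - V[0], 'fro') ** 2
                 for p in range(1, len(D)))
    return value

def ridge_project(U, x, lam, v_ref=None):
    """Solve min_v ||x - U v||^2 + lam ||v - v_ref||^2 in closed form.
    With v_ref omitted (treated as zero) this is equations (15) and (17);
    with v_ref given it is equations (14) and (19)."""
    K = U.shape[1]
    rhs = U.T @ x
    if v_ref is not None:
        rhs = rhs + lam * v_ref
    return np.linalg.solve(U.T @ U + lam * np.eye(K), rhs)

def update_U(D_p, V_p):
    """Row-wise least-squares update of U_p (equation (7), Algorithm 2),
    done for all rows at once: U_p = D_p V_p^T (V_p V_p^T)^{-1}."""
    return np.linalg.solve(V_p @ V_p.T, V_p @ D_p.T).T

def update_V(D_p, U_p, lam_p, V_1=None):
    """Column-wise update of V_p (equations (14)-(15), Algorithm 3);
    pass V_1 for a foreign language, omit it for the original one."""
    cols = []
    for j in range(D_p.shape[1]):
        v_ref = None if V_1 is None else V_1[:, j]
        cols.append(ridge_project(U_p, D_p[:, j], lam_p, v_ref))
    return np.column_stack(cols)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def relevance_score(theta, s_vsm, per_language_sims):
    """Equation (20): a linear combination of the term-space VSM score and
    the per-language reduced-space cosine similarities s(q_i, d_i)."""
    return float(np.dot(theta, np.concatenate(([s_vsm], per_language_sims))))
```

In an alternating scheme, one would call update_U and update_V for each language in turn for T iterations, monitoring objective for convergence, and then score a query against each historical question with ridge_project, cosine and relevance_score.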
This set contains 252 queried questions and can be freely downloaded for research communities.7 The original language of the above data set is English (l1) and then they are translated into four other languages (Chinese (l2), French (l3), German (l4), Italian (l5)), thus the number of language considered is P = 5) by using the state-of-the-art translation tool −−Google Translate. Evaluation Metrics: We evaluate the performance of question retrieval using the following metrics: Mean Average Precision (MAP) and Precision@N (P@N). MAP rewards methods that return relevant questions early and also rewards correct ranking of the results. P@N reports the fraction of the top-N questions retrieved that are relevant. We perform a significant test, i.e., a ttest with a default significant level of 0.05. We tune the parameters on a small development set of 50 questions. This development set is also extracted from Yahoo! Answers, and it is not included in the test set. For parameter K, we do an experiment on the development set to determine the optimal values among 50, 100, 150, · · · , 300 in terms of MAP. Finally, we set K = 100 in the experiments empirically as this setting yields the best performance. For parameter λ1, we set λ1 = 1 empirically, while for parameter λi (i ∈[2, P]), we set λi = 0.25 empirically and ensure that ∑ i λi = 1. 7http://homepages.inf.ed.ac.uk/gcong/qa/ 857 # Methods MAP P@10 1 VSM 0.242 0.226 2 LM 0.385 0.242 3 Jeon et al. (2005) 0.405 0.247 4 Xue et al. (2008) 0.436 0.261 5 Zhou et al. (2011) 0.452 0.268 6 Singh (2012) 0.450 0.267 7 Zhou et al. (2012) 0.483 0.275 8 SMT + MF (P = 2, l1, l2) 0.527 0.284 9 SMT + MF (P = 5) 0.564 0.291 Table 3: Comparison with different methods for question retrieval. 3.2 Question Retrieval Results Table 3 presents the main retrieval performance. Row 1 and row 2 are two baseline systems, which model the relevance score using VSM (Cao et al., 2010) and language model (LM) (Zhai and Lafferty, 2001; Cao et al., 2010) in the term space. Row 3 and row 6 are monolingual translation models to address the word mismatch problem and obtain the state-of-the-art performance in previous work. Row 3 is the word-based translation model (Jeon et al., 2005), and row 4 is the wordbased translation language model, which linearly combines the word-based translation model and language model into a unified framework (Xue et al., 2008). Row 5 is the phrase-based translation model, which translates a sequence of words as whole (Zhou et al., 2011). Row 6 is the entitybased translation model, which extends the wordbased translation model and explores strategies to learn the translation probabilities between words and the concepts using the CQA archives and a popular entity catalog (Singh, 2012). Row 7 is the bilingual translation model, which translates the English questions from Yahoo! Answers into Chinese questions using Google Translate and expands the English words with the translated Chinese words (Zhou et al., 2012). For these previous work, we use the same parameter settings in the original papers. Row 8 and row 9 are our proposed method, which leverages statistical machine translation to improve question retrieval via matrix factorization. In row 8, we only consider two languages (English and Chinese) and translate English questions into Chinese using Google Translate in order to compare with Zhou et al. (2012). In row 9, we translate English questions into other four languages. 
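For reference, the two evaluation metrics described above can be computed as in the generic sketch below; this is a standard formulation of MAP and P@N, not the exact evaluation script behind Table 3.

```python
def average_precision(ranked, relevant):
    """AP for a single queried question: `ranked` is the ordered list of
    retrieved question ids, `relevant` the set of ids judged relevant."""
    hits, score = 0, 0.0
    for i, q in enumerate(ranked, start=1):
        if q in relevant:
            hits += 1
            score += hits / i
    return score / max(len(relevant), 1)

def precision_at_n(ranked, relevant, n=10):
    """Fraction of the top-n retrieved questions that are relevant."""
    return sum(q in relevant for q in ranked[:n]) / n

def mean_average_precision(runs):
    """`runs` is a list of (ranked, relevant) pairs, one per queried question."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```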
There are some clear trends in the result of Table 3: (1) Monolingual translation models significantly outperform the VSM and LM (row 1 and row 2 vs. row 3, row 4, row 5 and row 6). (2) Taking advantage of potentially rich semantic information drawn from other languages via statistical machine translation, question retrieval performance can be significantly improved (row 3, row 4, row 5 and row 6 vs. row 7, row 8 and row 9, all these comparisons are statistically significant at p < 0.05). (3) Our proposed method (leveraging statistical machine translation via matrix factorization, SMT + MF) significantly outperforms the bilingual translation model of Zhou et al. (2012) (row 7 vs. row 8, the comparison is statistically significant at p < 0.05). The reason is that matrix factorization used in the paper can effectively solve the data sparseness and noise introduced by the machine translator simultaneously. (4) When considering more languages, question retrieval performance can be further improved (row 8 vs. row 9). Note that Wang et al. (2009) also addressed the word mismatch problem for question retrieval by using syntactic tree matching. We do not compare with Wang et al. (2009) in Table 3 because previous work (Ming et al., 2010) demonstrated that word-based translation language model (Xue et al., 2008) obtained the superior performance than the syntactic tree matching (Wang et al., 2009). Besides, some other studies attempt to improve question retrieval with category information (Cao et al., 2009; Cao et al., 2010), label ranking (Li et al., 2011) or world knowledge (Zhou et al., 2012). However, their methods are orthogonal to ours, and we suspect that combining the category information or label ranking into our proposed method might get even better performance. We leave it for future research. 3.3 Impact of the Matrix Factorization Our proposed method (SMT + MF) can effectively solve the data sparseness and noise via matrix factorization. To further investigate the impact of the matrix factorization, one intuitive way is to expand the original questions with the translated words from other four languages, without considering the data sparseness and noise introduced by machine translator. We compare our SMT + MF with this intuitive enriching method (SMT + IEM). Besides, we also employ our proposed matrix factorization to the original question representation (VSM + MF). Table 4 shows the comparison. 858 # Methods MAP P@10 1 VSM 0.242 0.226 2 VSM + MF 0.411 0.253 3 SMT + IEM (P = 5) 0.495 0.280 4 SMT + MF (P = 5) 0.564 0.291 Table 4: The impact of matrix factorization. (1) Our proposed matrix factorization can significantly improve the performance of question retrieval (row 1 vs. row2; row3 vs. row4, the improvements are statistically significant at p < 0.05). The results indicate that our proposed matrix factorization can effectively address the issues of data spareness and noise introduced by statistical machine translation. (2) Compared to the relative improvements of row 3 and row 4, the relative improvements of row 1 and row 2 is much larger. The reason may be that although matrix factorization can be used to reduce dimension, it may impair the meaningful terms. (3) Compared to VSM, the performance of SMT + IEM is significantly improved (row 1 vs. row 3), which supports the motivation that the word ambiguity and word mismatch problems could be partially addressed by Google Translate. 
3.4 Impact of the Translation Language One of the success of this paper is to take advantage of potentially rich semantic information drawn from other languages to solve the word ambiguity and word mismatch problems. So we construct a dummy translator (DT) that translates an English word to itself. Thus, through this translation, we do not add any semantic information into the original questions. The comparison is presented in Table 5. Row 1 (DT + MF) represents integrating two copies of English questions with our proposed matrix factorization. From Table 5, we have several different findings: (1) Taking advantage of potentially rich semantic information drawn from other languages can significantly improve the performance of question retrieval (row 1 vs. row 2, row 3, row 4 and row 5, the improvements relative to DT + MF are statistically significant at p < 0.05). (2) Different languages contribute unevenly for question retrieval (e.g., row 2 vs. row 3). The reason may be that the improvements of leveraging different other languages depend on the quality of machine translation. For example, row 3 # Methods MAP 1 DT + MF (l1, l1) 0.352 2 SMT + MF (P = 2, l1, l2) 0.527 3 SMT + MF (P = 2, l1, l3) 0.553 4 SMT + MF (P = 2, l1, l4) 0.536 5 SMT + MF (P = 2, l1, l5) 0.545 6 SMT + MF (P = 3, l1, l2, l3) 0.559 7 SMT + MF (P = 4, l1, l2, l3, l4) 0.563 8 SMT + MF (P = 5, l1, l2, l3, l4, l5) 0.564 Table 5: The impact of translation language. Method Translation MAP SMT + MF (P = 2, l1, l2) Dict 0.468 GTrans 0.527 Table 6: Impact of the contextual information. is better than row 2 because the translation quality of English-French is much better than EnglishChinese. (3) Using much more languages does not seem to produce significantly better performance (row 6 and row 7 vs. row 8). The reason may be that inconsistency between different languages may exist due to statistical machine translation. 3.5 Impact of the Contextual Information In this paper, we translate the English questions into other four languages using Google Translate (GTrans), which takes into account contextual information during translation. If we translate a question word by word, it discards the contextual information. We would expect that such a translation would not be able to solve the word ambiguity problem. To investigate the impact of contextual information for question retrieval, we only consider two languages and translate English questions into Chinese using an English to Chinese lexicon (Dict) in StarDict8. Table 6 shows the experimental results, we can see that the performance is degraded when the contextual information is not considered for the translation of questions. The reason is that GTrans is context-dependent and thus produces different translated Chinese words depending on the context of an English word. Therefore, the word ambiguity problem can be solved during the English-Chinese translation. 4 Conclusions and Future Work In this paper, we propose to employ statistical machine translation to improve question retrieval and 8StarDict is an open source dictionary software, available at http://stardict.sourceforge.net/. 859 enrich the question representation with the translated words from other languages via matrix factorization. 
Experiments conducted on a real CQA data show some promising findings: (1) the proposed method significantly outperforms the previous work for question retrieval; (2) the proposed matrix factorization can significantly improve the performance of question retrieval, no matter whether considering the translation languages or not; (3) considering more languages can further improve the performance but it does not seem to produce significantly better performance; (4) different languages contribute unevenly for question retrieval; (5) our proposed method can be easily adapted to the large-scale information retrieval task. As future work, we plan to incorporate the question structure (e.g., question topic and question focus (Duan et al., 2008)) into the question representation for question retrieval. We also want to further investigate the use of the proposed method for other kinds of data set, such as categorized questions from forum sites and FAQ sites. Acknowledgments This work was supported by the National Natural Science Foundation of China (No. 61070106, No. 61272332 and No. 61202329), the National High Technology Development 863 Program of China (No. 2012AA011102), the National Basic Research Program of China (No. 2012CB316300), We thank the anonymous reviewers for their insightful comments. We also thank Dr. Gao Cong for providing the data set and Dr. Li Cai for some discussion. References L. Adamic, J. Zhang, E. Bakshy, and M. Ackerman. 2008. Knowledge sharing and yahoo answers: everyone knows and something. In Proceedings of WWW. A. Berger, R. Caruana, D. Cohn, D. Freitag, and V. Mittal. 2000. Bridging the lexical chasm: statistical approach to answer-finding. In Proceedings of SIGIR, pages 192-199. D. Bernhard and I. Gurevych. 2009. Combining lexical semantic resources with question & answer archives for translation-based answer finding. In Proceedings of ACL, pages 728-736. P. F. Brown, V. J. D. Pietra, S. A. D. Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263-311. X. Cao, G. Cong, B. Cui, C. Jensen, and C. Zhang. 2009. The use of categorization information in language models for question retrieval. In Proceedings of CIKM, pages 265-274. X. Cao, G. Cong, B. Cui, and C. Jensen. 2010. A generalized framework of exploring category information for question retrieval in community question answer archives. In Proceedings of WWW, pages 201-210. H. Duan, Y. Cao, C. Y. Lin, and Y. Yu. 2008. Searching questions by identifying questions topics and question focus. In Proceedings of ACL, pages 156-164. C. L. Lawson and R. J. Hanson. 1974. Solving least squares problems. Prentice-Hall. J. -T. Lee, S. -B. Kim, Y. -I. Song, and H. -C. Rim. 2008. Bridging lexical gaps between queries and questions on large online Q&A collections with compact translation models. In Proceedings of EMNLP, pages 410-418. W. Wang, B. Li, and I. King. 2011. Improving question retrieval in community question answering with label ranking. In Proceedings of IJCNN, pages 349356. D. D. Lee and H. S. Seung. 2001. Algorithms for non-negative matrix factorization. In Proceedings of NIPS. Z. Ming, K. Wang, and T. -S. Chua. 2010. Prototype hierarchy based clustering for the categorization and navigation of web collections. In Proceedings of SIGIR, pages 2-9. J. Jeon, W. Croft, and J. Lee. 2005. Finding similar questions in large question and answer archives. In Proceedings of CIKM, pages 84-90. C. Paige and M. Saunders. 1982. 
LSQR: an algorithm for sparse linear equations and sparse least squares. ACM Transaction on Mathematical Software, 8(1):43-71. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. 1992. Numerical Recipes In C. Cambridge Univ. Press. S. Riezler, A. Vasserman, I. Tsochantaridis, V. Mittal, and Y. Liu. 2007. Statistical machine translation for query expansion in answer retrieval. In Proceedings of ACL, pages 464-471. A. Singh. 2012. Entity based q&a retrieval. In Proceedings of EMNLP-CoNLL, pages 1266-1277. J. Tang, X. Wang, H. Gao, X. Hu, and H. Liu. 2012. Enriching short text representation in microblog for clustering. Front. Comput., 6(1):88-101. 860 E. Wachsmuth, M. W. Oram, and D. I. Perrett. 1994. Recognition of objects and their component parts: responses of single units in the temporal cortex of teh macaque. Cerebral Cortex, 4:509-522. K. Wang, Z. Ming, and T-S. Chua. 2009. A syntactic tree matching approach to find similar questions in community-based qa services. In Proceedings of SIGIR, pages 187-194. B. Wang, X. Wang, C. Sun, B. Liu, and L. Sun. 2010. Modeling semantic relevance for question-answer pairs in web social communities. In Proceedings of ACL, pages 1230-1238. W. Xu, X. Liu, and Y. Gong. 2003. Document clustering based on non-negative matrix factorization. In Proceedings of SIGIR, pages 267-273. X. Xue, J. Jeon, and W. B. Croft. 2008. Retrieval models for question and answer archives. In Proceedings of SIGIR, pages 475-482. C. Zhai and J. Lafferty. 2001. A study of smooth methods for language models applied to ad hoc information retrieval. In Proceedings of SIGIR, pages 334342. G. Zhou, L. Cai, J. Zhao, and K. Liu. 2011. Phrasebased translation model for question retrieval in community question answer archives. In Proceedings of ACL, pages 653-662. G. Zhou, K. Liu, and J. Zhao. 2012. Exploiting bilingual translation for question retrieval in communitybased question answering. In Proceedings of COLING, pages 3153-3170. G. Zhou, Y. Liu, F. Liu, D. Zeng, and J. Zhao. 2013. Improving Question Retrieval in Community Question Answering Using World Knowledge. In Proceedings of IJCAI. 861
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 862–872, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Improved Lexical Acquisition through DPP-based Verb Clustering Roi Reichart University of Cambridge, UK [email protected] Anna Korhonen University of Cambridge, UK [email protected] Abstract Subcategorization frames (SCFs), selectional preferences (SPs) and verb classes capture related aspects of the predicateargument structure. We present the first unified framework for unsupervised learning of these three types of information. We show how to utilize Determinantal Point Processes (DPPs), elegant probabilistic models that are defined over the possible subsets of a given dataset and give higher probability mass to high quality and diverse subsets, for clustering. Our novel clustering algorithm constructs a joint SCF-DPP DPP kernel matrix and utilizes the efficient sampling algorithms of DPPs to cluster together verbs with similar SCFs and SPs. We evaluate the induced clusters in the context of the three tasks and show results that are superior to strong baselines for each 1. 1 Introduction Verb classes (VCs), subcategorization frames (SCFs) and selectional preferences (SPs) capture different aspects of predicate-argument structure. SCFs describe the syntactic realization of verbal predicate-argument structure, SPs capture the semantic preferences verbs have for their arguments and VCs in the Levin (1993) tradition provide a shared level of abstraction for verbs that share many aspects of their syntactic and semantic behavior. These three of types of information have proved useful for Natural Language Processing (NLP) 1The source code of the clustering algorithms and evaluation is submitted with this paper and will be made publicly available upon acceptance of the paper. tasks which require information about predicateargument structure, including parsing (Shi and Mihalcea, 2005; Cholakov and van Noord, 2010; Zhou et al., 2011), semantic role labeling (Swier and Stevenson, 2004; Dang, 2004; Bharati et al., 2005; Moschitti and Basili, 2005; zap, 2008; Zapirain et al., 2009), and word sense disambiguation (Dang, 2004; Thater et al., 2010; ´O S´eaghdha and Korhonen, 2011), among many others. Because lexical information is highly sensitive to domain variation, approaches that can identify VCs, SCFs and SPs in corpora have become increasingly popular, e.g. (O’Donovan et al., 2005; Schulte im Walde, 2006; Erk, 2007; Preiss et al., 2007; Van de Cruys, 2009; Reisinger and Mooney, 2011; Sun and Korhonen, 2011; Lippincott et al., 2012). The task of SCF induction involves identifying the arguments of a verb lemma and generalizing about the frames (i.e. SCFs) taken by the verb, where each frame includes a number of arguments and their syntactic types. For example, in (1), the verb ”show” takes the frame SUBJ-DOBJCCOMP (subject, direct object, and clausal complement). (1) [A number of SCF acquisition papers]SUBJ [show]VERB [their readers]DOBJ [which features are most valuable for the acquisition process]CCOMP. SP induction involves identifying and classifying the lexical items in a given argument slot. In sentence (2), for example, the verb ”show” takes the frame SUBJ-DOBJ. The direct object in this frame is likely to be inanimate. (2) [Most SCF and SP acquisition papers]SUBJ, 862 [show]VERB [no evidence to the usefulness of joint learning leaning for these tasks]DOBJ. 
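As background for the DPP machinery referred to in the introduction, the sketch below shows how an L-ensemble DPP assigns probability to a subset of items through the determinant of the corresponding kernel submatrix, P(Y) = det(L_Y) / det(L + I). This is standard DPP material in the sense of Kulesza and Taskar (2012c), not the joint SCF-SP kernel construction proposed in this paper.

```python
import numpy as np

def dpp_log_prob(L, subset):
    """Log-probability of a subset Y under an L-ensemble DPP:
    P(Y) = det(L_Y) / det(L + I).  Subsets of high-quality, mutually
    dissimilar items have larger principal minors det(L_Y) and therefore
    receive more probability mass, which is what makes DPPs attractive
    for selecting diverse sets of items."""
    L = np.asarray(L, dtype=float)
    idx = sorted(subset)
    if idx:
        _, logdet_Y = np.linalg.slogdet(L[np.ix_(idx, idx)])
    else:
        logdet_Y = 0.0  # the empty minor has determinant 1 by convention
    _, logdet_norm = np.linalg.slogdet(L + np.eye(L.shape[0]))
    return logdet_Y - logdet_norm
```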
Finally, VC induction involves clustering together verbs with similar meaning, reflected in similar SCFs and SPs. For example, ”show” in the above examples could get clustered together with ”demonstrate” and ”indicate”. Because these challenging tasks capture complementary information about predicate argument structure, they should be able to inform and support each other. Recently, researchers have begun to investigate the benefits of their joint learning. Schulte im Walde et al. (2008) integrated SCF and VC acquisition and used it for WordNet-based SP classification. ´O S´eaghdha (2010) presented a “dual-topic” model for SPs that induces also verb clusters. Both works reported SP evaluation with promising results. Lippincott et al. (2012) presented a joint model for inducing simple syntactic frames and VCs. They reported high accuracy results on VCs. de Cruys et al. (2012) introduced a joint model for SCF and SP acquisition. They evaluated both the SCFs and SPs, obtaining reasonable result on both tasks. In this paper, we present the first unified framework for unsupervised learning of the three types of information - SCFs, SPs and VCs. Our framework is based on Determinantal Point Processes (DPPs, (Kulesza, 2012; Kulesza and Taskar, 2012c)), elegant probabilistic models that are defined over the possible subsets of a given dataset and give higher probability mass to high quality and diverse subsets. We first show how individual-task DPP kernel matrices can be naturally combined to construct a joint kernel. We use this to construct a joint SCFSP kernel. We then introduce a novel clustering algorithm based on iterative DPP sampling which can (contrary to other probabilistic frameworks such as Markov random fields) be performed both accurately and efficiently. When defined over the joint SCF and SP kernel, this new algorithm can be used to induce VCs that are valuable for both tasks. We also contribute by evaluating the value of the clusters induced by our model for the acquisition of the three information types. Our evaluation against a well-known VC gold standard shows that our clustering model outperforms the state-of-theart verb clustering algorithm of Sun and Korhonen (2009), in our setup where no manually created SCF or SP data is available. Our evaluation against a well-known SCF gold standard and in the context of SP disambiguation tasks shows results that are superior to strong baselines, demonstrating the benefit our approach. 2 Previous Work SCF acquisition Most current works induce SCFs from the output of an unlexicalized parser (i.e. a parser trained without SCF annotations) using hand-written rules (Briscoe and Carroll, 1997; Korhonen, 2002; Preiss et al., 2007) or grammatical relation (GR) co-occurrence statistics (O’Donovan et al., 2005; Chesley and Salmon-Alt, 2006; Ienco et al., 2008; Messiant et al., 2008; Lenci et al., 2008; Altamirano and Alonso i Alemany, 2010; Kawahara and Kurohashi, 2010). Only a handful of SCF induction works are unsupervised. Carroll and Rooth (1996) applied an EM-based approach to a context-free grammar based model, Dkebowski (2009) used point-wise co-occurrence of arguments in parsed Polish data and Lippincott et al. (2012) presented a Bayesian network model for syntactic frame induction that identifies SPs on argument types. However, the frames induced by Lippincott et al. (2012) do not capture sets of arguments for verbs so are far simpler than traditional SCFs. 
Current approaches to SCF acquisition suffer from lack of semantic information which is needed to guide the purely syntax-driven acquisition process. Previous works have showed the benefit of hand-coded semantic information in SCF acquisition (Korhonen, 2002). We will address this problem in an unsupervised way: our approach is to consider SCFs together with semantic SPs through VCs which generalize over syntactically and semantically similar verbs. SP acquisition Considerable research has been conducted on SP acquisition, with a variety of unsupervised models proposed for this task that use no hand-crafted information during training. The latter approaches include latent variable models ( ´O S´eaghdha, 2010; Ritter and Etzioni, 2010; Reisinger and Mooney, 2011), distributional similarity methods (Bhagat et al., 2007; Basili et al., 2007; Erk, 2007) and methods based on non-negative tensor factorization (Van de Cruys, 2009). These works use a variety of linguistic features in the acquisition process but none of them 863 integrates the three information types covered in our work. Verb clustering A variety of VC approaches have been proposed in the literature. These include syntactic, semantic and mixed syntacticsemantic classifications (Grishman et al., 1994; Miller, 1995; Baker et al., 1998; Palmer et al., 2005; Schuler, 2006; Hovy et al., 2006). We focus on Levin style classes (Levin, 1993) which are defined in terms of diathesis alternations and capture generalizations over a range of syntactic and semantic properties. Previous unsupervised VC acquisition approaches clustered a variety of linguistic features using different (e.g. K-means and spectral) algorithms (Schulte im Walde, 2006; Joanis et al., 2008; Sun et al., 2008; Li and Brew, 2008; Korhonen et al., 2008; Sun and Korhonen, 2009; Vlachos et al., 2009; Sun and Korhonen, 2011). The linguistic features included SCFs and SPs, but these were induced separately and then feeded as features to the clustering algorithm. Our framework combines together SCF-motivated and SP-motivated kernel matrices , and uses the joint kernel to induce verb clusters which are likely to be highly relevant for both tasks. Importantly, no manual or automatic system for SCF or SP acquisition has been utilized when constructing the kernel matrices, we only consider features extracted from the output of an unlexicalized parser. Our approach hence provides a framework for acquiring valuable information for the three tasks together. Joint Modeling A small number of works have recently investigated joint approaches to SCFs, SPs and VCs. Each of them has addressed only a subset of the tasks and all but one have evaluated the performance in the context of one task only. ´O S´eaghdha (2010) presented a “dual-topic” model for SPs that induces VCs, reporting evaluation of SPs only. Lippincott et al. (2012) presented a Bayesian network model for syntactic frame (rather than full SCF) induction that induces VCs. Only VCs are evaluated. de Cruys et al. (2012) presented a joint unsupervised model of SCF and SP acquisition based on non-negative tensor factorization. Both SCFs and SPs were evaluated. Finally, the model of Schulte im Walde et al. (2008) addresses the three types of information but SP parameters are estimated with a WordNet based method and only the SPs are evaluated. 
Although evaluation of these recent joint models has been partial, the results have been encouraging and further motivate the development of a framework that acquires the three types of information together.

3 The Unified Framework

In this section we present our unified framework. Our idea is to utilize DPPs for verb clustering that informs both SCF and SP acquisition. DPPs define a probability distribution over the possible subsets of a given set. These models assign higher probability mass to subsets that are both high quality and diverse. Our novel clustering algorithm makes use of three DPP properties that are appealing for our purpose: (1) the existence of efficient sampling algorithms for these models, which enable tractable sampling of high quality and diverse verb subsets; (2) such verb subsets form natural high quality seeds for hierarchical clustering; and (3) given individual-task DPP kernel matrices, there are various simple and natural ways to combine them into a new DPP kernel matrix. Individual-task DPP kernels represent (i) the quality of a data point (verb) as its average feature-based similarity with the other points in the data set and (ii) the divergence between a pair of points as the inverse similarity between them. For different tasks, different feature sets are used for the kernel construction. The high quality and diverse subsets sampled from the DPP model are considered good cluster seeds as they are likely to be relatively uniformly spread and to provide good coverage of the data set. The algorithm induces a hierarchical clustering, which is particularly suitable for semantic tasks, where a set of clusters that share a parent consists of pure members (i.e. most of the points in each member cluster belong to the same gold cluster) and together provide good coverage of the verb space. After a brief description of the Determinantal Point Process (DPP) framework (Section 3.1), we discuss the construction of the joint DPP kernel, given a kernel for each individual task (Section 3.2). In Section 3.3 we present the DPP-Cluster clustering algorithm.

3.1 Determinantal Point Processes

Determinantal point processes (DPPs) are elegant probabilistic models of repulsion that offer efficient and exact algorithms for sampling, marginalization, conditioning, and other inference tasks. Recently, Kulesza (2012) and Kulesza and Taskar (2012c) introduced them to the machine learning community and demonstrated their usefulness for a variety of tasks including document summarization, image search, modeling non-overlapping human poses in images and video, and automatically building timelines of important news stories (Kulesza and Taskar, 2010; Kulesza and Taskar, 2012a; Gillenwater et al., 2012; Kulesza and Taskar, 2012b). Below we provide a brief description of the framework; a comprehensive survey can be found in (Kulesza and Taskar, 2012c). Given a set of items $\mathcal{Y} = \{y_1, \ldots, y_N\}$, a DPP $\mathcal{P}$ defines a probability measure on the set of all subsets of $\mathcal{Y}$, $2^{\mathcal{Y}}$. Kulesza and Taskar (2012c) restricted their discussion of DPPs to L-ensembles, where the probability of a subset $Y \subseteq \mathcal{Y}$ is defined through a positive semi-definite matrix $L$ indexed by the elements of $\mathcal{Y}$:

$$P_L(\mathbf{Y} = Y) = \frac{\det(L_Y)}{\sum_{Y' \subseteq \mathcal{Y}} \det(L_{Y'})} = \frac{\det(L_Y)}{\det(L + I)} \quad (1)$$

where $I$ is the $N \times N$ identity matrix and $\det(L_\emptyset) = 1$. Since $L$ is positive semi-definite, it can be decomposed as $L = B^T B$.
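To make equation (1) concrete, here is a minimal numpy sketch of an L-ensemble subset probability; the kernel and item indices are illustrative toys, not taken from the paper.

```python
import numpy as np

def l_ensemble_prob(L, subset):
    """Probability of `subset` (a list of item indices) under the
    L-ensemble DPP of equation (1): det(L_Y) / det(L + I)."""
    L = np.asarray(L, dtype=float)
    n = L.shape[0]
    # det(L_emptyset) is defined to be 1
    num = 1.0 if len(subset) == 0 else np.linalg.det(L[np.ix_(subset, subset)])
    den = np.linalg.det(L + np.eye(n))
    return num / den

# Toy example: a 3-item kernel; diverse pairs get more mass than near-duplicates.
B = np.array([[1.0, 0.9, 0.1],
              [0.0, 0.1, 1.0]])          # columns = feature vectors of 3 items
L = B.T @ B                              # positive semi-definite by construction
print(l_ensemble_prob(L, [0, 2]))        # dissimilar pair -> relatively high probability
print(l_ensemble_prob(L, [0, 1]))        # near-duplicate pair -> much lower probability
```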
This allows the construction of an intuitively interpretable model where each column $B_i$ is the product of a quality term $q_i \in \mathbb{R}^+$ and a vector of (normalized) diversity features $\phi_i \in \mathbb{R}^D$, $\|\phi_i\| = 1$. In this model, $q_i$ measures an inherent quality of the $i$-th item in $\mathcal{Y}$ while $\phi_i^T \phi_j \in [-1, 1]$ is a similarity measure between items $i$ and $j$. With this representation we can write:

$$L_{ij} = q_i \, \phi_i^T \phi_j \, q_j \quad (2)$$

$$S_{ij} = \phi_i^T \phi_j = \frac{L_{ij}}{\sqrt{L_{ii} L_{jj}}} \quad (3)$$

$$P_L(\mathbf{Y} = Y) \propto \Big( \prod_{i \in Y} q_i^2 \Big) \det(S_Y) \quad (4)$$

It can be shown that the first term in equation 4 increases with the quality of the selected items, and the second term increases with their diversity. As a consequence, this distribution places most of its weight on sets that are both high quality and diverse. Although the number of possible realizations of $\mathbf{Y}$ is exponential in $N$, many inference procedures can be performed accurately and efficiently (i.e. in polynomial time, which is very short in practice). In particular, sampling, which is NP-hard for alternative models such as Markov Random Fields (MRFs), is efficient, theoretically and practically, for DPPs.

3.2 Constructing a Joint Kernel Matrix

DPPs are particularly suitable for joint modeling as they come with various simple and intuitive ways to combine individual model kernel matrices into a joint kernel. This stems from the fact that every positive-semidefinite matrix forms a legal DPP kernel (equation 1). Given individual model DPP kernels, we would therefore like to combine them into a positive-semidefinite matrix. While there are various ways to construct a positive-semidefinite matrix from two positive-semidefinite matrices – for example, by taking their sum – in this work we are motivated by the product of experts approach (Hinton, 2002), reasoning that high quality assignments according to a product of models have to be of high quality according to each individual model, and we therefore opt for a product combination.2 In practice we construct the joint kernel in the following way. We build on the aforementioned property that a matrix $L$ is positive semi-definite iff $L = B^T B$. Given two DPPs, $P_{L_1}$ defined by $L_1 = A_1^T A_1$ and $P_{L_2}$ defined by $L_2 = A_2^T A_2$, we construct the joint kernel $L_{12}$:

$$L_{12} = L_1 L_2 L_2 L_1 = C^T C \quad (5)$$

where $C = A_2^T A_2 A_1^T A_1$ and $C^T = A_1^T A_1 A_2^T A_2$.

2 Note that we do not take a product of the individual models but only of their kernel matrices. Yet, if we construct the joint matrix by multiplication, then it follows from a simple generalization of the Cauchy-Binet formula that its principal minors, which define the subset probabilities (equation 1), are a sum of products of the principal minors of the individual model kernels. Still, we have no guarantee that our choice of kernel combination is the right one. We leave this for future research.

3.3 Clustering Algorithm

Algorithm 1 and Figure 1 provide pseudocode for the algorithm and an example of its output. Below is a detailed description.

Features Our algorithm builds two DPP kernel matrices (the GenKernelMatrix function), in which the rows and columns correspond to the verbs in the data set, such that the (i, j)-th entry corresponds to verbs number i and j. Following equations 2 and 3, one matrix is built for SCF and one for SP, and they are then combined into the joint kernel matrix (the GenJointMat function) following equation 5. Each kernel matrix requires a proper feature representation $\phi$ and quality score $q$. In both kernels we represent a verb by the counts of the grammatical relations (GRs) it participates in.
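The kernel construction just described can be sketched in a few lines of numpy. This is only an illustration of equations (2), (3) and (5): the feature matrices are random stand-ins for the GR-count representations described below, and the quality definition (a verb's average similarity to the other verbs) follows the prose of Section 3 rather than the authors' code.

```python
import numpy as np

def quality_diversity_kernel(X):
    """Build a DPP kernel from a feature matrix X (rows = verbs).
    Diversity features phi_i are the normalized rows; the quality q_i is
    the verb's average similarity to the other verbs (an assumption of
    this sketch, following the description in Section 3)."""
    phi = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = phi @ phi.T                                  # S_ij = phi_i . phi_j   (eq. 3)
    q = (S.sum(axis=1) - 1.0) / (S.shape[0] - 1)     # average off-diagonal similarity
    return np.outer(q, q) * S                        # L_ij = q_i S_ij q_j    (eq. 2)

def joint_kernel(L1, L2):
    """Joint kernel of eq. 5: L12 = L1 L2 L2 L1 = (L1 L2)(L1 L2)^T,
    which is positive semi-definite by construction."""
    return L1 @ L2 @ L2 @ L1

# Hypothetical GR-count feature matrices for the same 4 verbs.
X_scf = np.random.rand(4, 10)   # SCF features: GR type + POS of verb and arguments
X_sp  = np.random.rand(4, 50)   # SP features: POS tags + argument head words
L12 = joint_kernel(quality_diversity_kernel(X_scf), quality_diversity_kernel(X_sp))
```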
In the SCF kernel a GR is represented by the GR type and the POS tags of the verb and its arguments. In the SP kernels the GRs are represented by the POS tags of the verb and its arguments as well as by the argument head word. Based on this feature representation, the similarity (opposite divergence) is encoded to the model by equation 3 as the dot product between the normalized feature vectors. The quality score qi of the i-th verb is the average similarity of this verb with the other verbs in the dataset. Cluster set construction In its while loop, the algorithm iteratively generates fixed-size cluster sets such that each data point belongs to exactly one cluster in one set. These cluster sets form the leaf level of the tree in Figure (1). It does so by extracting the T highest probability K-point samples from a set of M subsets, each of which sampled from the joint DPP model, and clustering them by the cluster procedure. The sampling is done by the K-DPP sampling process ((Kulesza and Taskar, 2012c), page 62) 3. The cluster procedure first seeds a K-cluster set with the highest probability sample. Then, it gradually extends the clusters by iteratively mapping the samples, in decreasing order of probability, to the existing clusters (the m1Mapping function). Mapping is done by attaching every point in the mapped subset to its closet cluster, where the distance between a point and the cluster is the maximum over the distances between the point and each of the points in the cluster. The mapping is many-to-one, that is, multiple points in the subset can be assigned to the same cluster. Based on the DPP properties, the higher the probability of a sampled subset, the more likely it is to consist of distinct points that provide a good coverage of the verb set. By iteratively extending the clusters with high probability subsets, we thus expect each cluster set to consist of clusters that demonstrate these properties. 3K-DPP is a DPP conditioned on the sample size. As shown in ((Kulesza and Taskar, 2012c), Section 2.4.3) this conditional distribution is also a DPP. We could have obtained samples of size K by sampling the DPP and rejecting samples of other sizes but this would have been slower. SET 1-2-3-4 (45,K) SET 1-2 (23,K) SET1 (12,K) SET2 (11,K) SET3-4(22,K) SET 3 (12,K) SET4 (10,K) Figure 1: An example output hierarchy of DPPCluster for a set of 45 data points. Each set is augmented with the number of points (left number) and clusters (right number) it includes. The iterative DPP-samples clustering (the While loop) generates the lowest level of the tree, by dividing the data set into cluster sets, each of which consists of K clusters. Each point in the data set belongs to exactly one cluster in exactly one set. The agglomerative clustering then iteratively combines cluster sets such that in each iteration two sets are combined to one set with K clusters. Agglomerative Clustering Finally, the AgglomerativeClustering function builds a hierarchy of cluster sets, by iteratively combining cluster set pairs. In each iteration it computes the similarity between any such pair, defined to be the lowest similarity between their cluster members, which is in turn defined to be the lowest cosine similarity between their point members. The most similar cluster sets are combined such that each of the clusters in one set is mapped to its most similar cluster in the other set. 
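The cluster-set construction described above (K-DPP sampling, seeding with the top sample, and the m1Mapping step) can be sketched as follows. This is a simplified illustration, not the authors' implementation: the K-DPP sampler is treated as an external black box, probabilities are assumed to come with each sample, and the stopping condition of the original cluster procedure is omitted.

```python
import numpy as np

def point_cluster_distance(i, cluster, S):
    # distance of a point to a cluster: the maximum over the distances
    # (here 1 - similarity) to each of the cluster's current members
    return max(1.0 - S[i, j] for j in cluster)

def build_cluster_set(samples, probs, S):
    """Seed K clusters with the highest-probability K-point sample, then map
    the remaining top samples, in decreasing order of probability, many-to-one
    onto their closest clusters (the m1Mapping step)."""
    order = np.argsort(probs)[::-1]
    clusters = [[i] for i in samples[order[0]]]          # seed with the top sample
    for idx in order[1:]:
        for i in samples[idx]:
            best = min(range(len(clusters)),
                       key=lambda c: point_cluster_distance(i, clusters[c], S))
            clusters[best].append(i)
    return clusters

# Hypothetical usage: `samples` is a list of K-point index lists drawn by an
# external K-DPP sampler from the joint kernel L12, `probs` their probabilities,
# and S12 the similarity matrix of eq. 3.
# clusters = build_cluster_set(samples, probs, S12)
```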
In this step the algorithm generates data partitions at different granularity levels from finest (from the iterative sampling step) to the coarsest set (generated by the last agglomerative clustering iteration and consisting of exactly K clusters). This property is useful as the optimal level of generalization may be task dependent. 4 Evaluation Data sets and gold standards We evaluated the SCFs and verb clusters on gold standard datasets. We based our set of the largest available joint set for SCFs and VCs - that of (de Cruys et al., 2012). It provides SCF annotations for 183 verbs (an average of 12.3 SCF types per verb) obtained by annotating 250 corpus occurrences per verb with the SCF types of (de Cruys et al., 2012). The verbs represent a range of Levin classes at the top level of the hierarchy in VerbNet (Kipper-Schuler, 2005). Where a verb has more than one VerbNet class, we assign it to the one supported by the highest number of member verbs. To ensure suf866 |C| = 20, 21.6 |C| = 40, 41 |C| = 60, 58.6 |C| = 69, 77.6 |C| = 89, 97.4 Model R P F R P F R P F R P F R P F DPP-cluster 93.1 17.3 29.3 77.9 25.4 38.3 63 31.9 42.3 43.8 33.6 38.1 34.4 40.6 37.2 AC 67 17.8 28.2 46.6 24 31.7 40.5 29.4 34 33 34.9 33.9 24.7 41.1 30.9 SC 32.1 27.5 29.6 26.6 35.9 30.6 23.7 41.5 30.2 22.8 43.6 29.9 21.6 48.7 29.9 Table 1: Verb clustering evaluation for the last five iterations of our DPP-cluster model and the baseline agglomerative clustering algorithm (AC, see text for its description), and for the spectral clustering (SC) algorithm of (Sun and Korhonen, 2009) with the same number of clusters induced by DPP-cluster. |C| is the number of clusters for DPP-cluster and SC (first number) and for AC (second number). The F-score performance of DPP-cluster is superior in 4 out of 5 cases. Arg. per verb P (DPP) P(AC) P (B) P (NF) R (DPP) R (AC) R (B) R(NF) ERR DPP ERR AC ERR B ≤200 (133 verbs) 27.3 23.7 27.3 23.1 9.9 7.6 8 11.3 3.4 0.16 1.55 ≤600 (205 verbs) 26.5 25 27.3 22.6 14.8 11.5 11.9 16.6 2.3 0.50 1.1 ≤1000 (238 verbs) 24.6 23.6 25.6 21.1 17.5 13.8 14.7 19.8 1.6 0.42 0.95 Table 2: Performance of the Corpus Statistics SP baseline (non-filtered, NF) as well as for three filtering methods: frequency based (filter-baseline, B), DPP-cluster based (DPP) and AC cluster based (AC). P (method) and R (method) present the precision and recall of the method respectively. The error reduction ratio (ERR) is the ratio between the reduction in precision error achieved by each method and the increase in recall error (each method is compared to the NF baseline). Ratio greater than 1 means that the reduction in precision error is larger than the increase in recall error (see text for exact definition). DPP based filtering provides substantially better ratio. ficient representation of each class, we collected from VerbNet the verbs for which at least one of the possible classes is represented in the 183 verbs set by at least one and at most seven verbs. This yielded 101 additional verbs which we added to the gold standard with the initial 183 verbs. We parsed the BNC corpus with the RASP parser (Briscoe et al., 2006) and used it for feature extraction. Since 176 out of the 183 initial verbs are represented in this corpus, our final gold standard consists of 34 classes containing 277 verbs, of which 176 have SCF gold standard and has been evaluated for this task. We set the parameters of our algorithm on an held-out data, consisting of different verbs than those used in our experiments, to be M = 10000, K = 20 and T = 10. 
Clustering Evaluation We first evaluate the quality of the clusters induced by our algorithm (DPP-cluster) compared to the gold standard VCs (table 1). To evaluate the importance of the DPP component, we compare to the performance of a version of our algorithm where everything is kept fixed except from the sampling which is done from a uniform distribution rather than from the DPP joint kernel (this model is denoted in the table with AC for agglomerative clustering) 4. We also compare to the state-of-the-art spectral clustering method of Sun and Korhonen (2009) where our 4Importantly, the kernel matrix L used in the agglomerative clustering process is also used by AC. kernel matrix is used for the distance between data points (SC) 5. We evaluated the unified cluster set induced in each iteration of our algorithm and of the AC baseline and induced the same number of clusters as in each iteration of our algorithm using the SC baseline. Since the number of clusters in each iteration is not an argument for our algorithm or for the AC baseline, the number of clusters slightly differ between the two. The AC and SC baseline results were averaged over 5 and 100 runs respectively. DPP-cluster has produced identical output across runs. The table demonstrates the superiority of the DPP-cluster model. For four out of five conditions its F-score performance outperforms the baselines by 4.2-8.3%. Moreover, in all conditions its recall performances are substantially higher than those of the baselines (by 9.7-26.1%). Note that DPPcluster runs for 17 iterations while the AC baseline performs only 6. We therefore evaluated only the last 5 iterations of each model 6. SCF evaluation For this evaluation, we first built a baseline SCF lexicon based on the parsed 5Sun and Korhonen (2009) report better results than those we report for their algorithm (on a different data set). Note, however, that they used the output of a rule-based SCF system as a source of features, as opposed to our unsupervised approach. 6For the additional comparable iteration the result pattern is very similar to the (C = 89, 97.4) case in the table, and is not presented due to space limitations. 867 Algorithm 1 The DPP-cluster clustering algorithm. K is the size of the sampled subsets, M is the number of subsets sampled at each iteration, Y is the verb set, T is the number of most probable samples to be used in each iteration Algorithm DPP-cluster : Arguments: K,M,Y,T Return: cluster sets S = {S1, . . . Sn} i ←1 S ←∅ while Y ̸= ∅do (L1, S1) ←GenKernelMatrix(Y, SCF) (L2, S2) ←GenKernelMatrix(Y, SP) (L12, S12) ←GenJointMat(L1, L2) samples ←sampleDpp(L, K, M) topSamples ←exTop(samples, T) Si ←cluster(topSamples, L) Y ←Y −elements(Si) S ←S ∪Si i ←i + 1 end while AgglomerativeClustering(S) ——————————————————— ——– Function cluster : Arguments: topSamples,L Return: S S ←∅, topSample ←∅ i ←1 while (topSample ∩elements(S) = ∅) do topSample ←topSamples(i) S ←m1Mapping(topSample, S) i ←i + 1 if (i > size(topSamples)) then return S end if end while BNC corpus. We do this by gathering the GR combinations for each of the verbs in our gold standard, assuming they are frames and gathering their frequencies. Note that this corpus statistics baseline is a very strong baseline that performs very similarly to (de Cruys et al., 2012), the best unsupervised SCF model we are aware of, when run on their dataset 7. 
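As a rough illustration of how such a corpus-statistics SCF lexicon can be assembled from parser output, consider the sketch below. It assumes the GR output has already been reduced to (verb, [GR types]) tuples per verb occurrence; the function and data layout are hypothetical, not the authors' pipeline.

```python
from collections import Counter, defaultdict

def build_scf_lexicon(parsed_occurrences, target_verbs):
    """Corpus-statistics SCF baseline sketch: for every occurrence of a target
    verb, treat the combination of its grammatical relations (GRs) as a frame
    and count how often each frame is seen with that verb.
    `parsed_occurrences` is assumed to yield (verb_lemma, [gr_type, ...]) pairs
    extracted from an unlexicalized parse (e.g. RASP output)."""
    counts = defaultdict(Counter)
    for verb, gr_types in parsed_occurrences:
        if verb in target_verbs:
            frame = tuple(sorted(gr_types))          # e.g. ('dobj', 'ncsubj')
            counts[verb][frame] += 1
    # relative frequencies allow thresholding (e.g. the 0.03 / 0.05 cut-offs used later)
    return {v: {f: c / sum(cnt.values()) for f, c in cnt.items()}
            for v, cnt in counts.items()}
```

The frequency thresholds and the cluster-based filter discussed next operate on these relative frequencies.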
As shown in table 3 the corpus statistics baseline achieves high recall (84%) at the cost of low precision (52.5%) (similar pattern has been 7personal communication with the authors. demonstrated for the system of de Cruys et al. (2012)). On the other extreme, two other commonly used baselines strongly prefer precision. These are the Most Frequent SCF (O’Donovan et al., 2005) which uniformly assigns to all verbs the two most frequent SCFs in general language, transitive (SUBJ-DOBJ) and intransitive (SUBJ) (and results in poor F-score), and a filtering that removes frames with low corpus frequencies (which results in low recall even when trying to provide the maximum recall for a given precision level). The task we address is therefore to improve the precision of the corpus statistics baseline in a way that does not substantially harm the F-score. To remedy this imbalance, we apply a cluster based filtering method on top of the maximumrecall frequency filter. This filter excludes a candidate frame from a verb’s lexicon only if it meets the frequency filter criterion and appears in no more than N other members of the cluster of the verb in question. The filter utilizes the clustering produced by the seventh to last iteration of DPPcluster that contains seven clusters with approximately 30 members each. Such clustering should provide a good generalization level for the task. We report results for moderate as well as aggressive filtering (N = 3 and N = 7 respectively). Table 3 clearly demonstrates that cluster based filtering (DPP-cluster and AC) is the only method that provides a good balance between the recall and the precision of the SCF lexicon. Moreover, the lexicon induced by this method includes a substantially higher number of frames per verb compared to the other filtering methods. While both AC and DPP-cluster still prefer recall to precision, DPP-cluster does so to a smaller extent 8. This clearly demonstrates that the clustering serves to provide SCF acquisition with semantic information needed for improved performance. SP evaluation We explore a variant of the pseudo-disambiguation task of Rooth et al. (1999) which has been applied to SP acquisition by a number of recent papers (e.g. (de Cruys et al., 2012)). Rooth et al. (1999) proposed to judge which of two verbs v and ˜v is more likely to take a given noun n as its argument. In their experiments the model has to choose between a pair (v, n) that 8We show results for the maximum recall frequency filtering with precision equals to 80 or 90. When the frequency threshold is further reduced from 0.03, the same result pattern hold. We do not give a detailed description due to space limitations. 868 Corpus Statistics: [P = 52.5, R = 84, F = 64.6, AF = 12.3] Most Frequent SCF: [P = 86.7, R = 22.5, F = 35.8, AF = 2] Clustering Moderate Clustering Aggressive Maximum Recall Frequency Threshold Model P R F AF P R F AF threshold = 0.03, Prec. > 80 DPP-cluster 60.8 68.3 64.3 8.7 64.1 64.2 64.2 7.7 [P=88.7,R=52.4,F=65.9,AF=4.5] AC 58 73.2 64.6 9.7 61.3 68.9 64.7 8.6 threshold = 0.05, Prec. > 90 DPP-cluster 60.1 64.6 62.3 8.7 63.3 59.3 61.3 7.2 [P=92.3,R=44.4,F=59.9,AF=3.7] AC 57.5 70.6 63.2 9.4 60.7 65.4 62.7 8.3 Table 3: SCF Results for the DPP-cluster model compared to the Corpus Statistics baseline, Most Frequent SCF baseline, maximum-recall frequency thresholding with the maximum threshold values that keep precision above 80 (threshold = 0.03) and above 90 (threshold = 0.05), and the AC clustering baseline. 
AF is the average number of frames per verb. All methods except from cluster based filtering (DPP-cluster and AC) induce lexicons with strong recall/precision imbalance. Cluster based filtering keeps a larger number of frames in the lexicon compared to the frequency thresholding baseline, while keeping similar F-score levels. DPP-cluster provides better recall/precision balance than AC. appears only in the test corpus and a pair (˜v, n) that appears neither in the test nor in the training corpus. Note, however, that this test only evaluates the capability of a model to distinguish a correct unseen verb-argument pair from an incorrect one, but not its capability to identify erroneous pairs when no alternative pair is presented. This last property can strongly affect the precision of the model. We therefore propose to measure both aspects of the SP task by computing both the recall and the precision between the list of possible arguments a verb can take according to the model and the corresponding test corpus list 9. We evaluate the value of our clustering for SP acquisition in the particularly challenging scenario of domain adaptation. For each of the verbs in our set we induce a list of possible noun direct objects from the BNC corpus and an equivalent list from the North American News Text (NANT) corpus. Following previous work (e.g. (de Cruys et al., 2012)) arguments are identified using a parser (RASP in our case). Using the verb clusters we create a filtered version of the BNC argument lexicon which includes in the noun argument list of a verb only those nouns that appear in the BNC as arguments of that verb and of one of its cluster members. For each verb we then compare the filtered as well as the non-filtered BNC induced lexicon to the NANT lexicon by computing the average recall and precision between the argument lists 9In principle these measures can take into account the probability assigned by the model to each argument and the corresponding test corpus frequency. In this work we compute probability-ignorant scores and keep more sophisticated evaluations for future research. and then report the average scores across all verbs. We compare to a baseline which maintains only noun arguments that appear at least twice in BNC 10. As a final measure of performance we compute the ratio between the reduction in precision error (i.e. pmodel−pbaseline 100−pbaseline ) and the increase in recall error (rbaseline−rmodel 100−rmodel ). Table 2 presents the results for verbs with up to 200, 600 and 1000 noun arguments in the training data. In all cases, the relative error reduction of the DPP cluster filter is substantially higher than that of the frequency baseline. Note that for this task the baseline AC clusters are of low quality which is reflects by an error reduction ratio of up to 0.5. 5 Conclusions and Future Work In this paper we have presented the first unified framework for the induction of verb clusters, subcategorization frames and selectional preferences from corpus data. Our key idea is to cluster together verbs with similar SCFs and SPs and to use the resulting clusters for SCF and SP induction. To implement our idea we presented a novel method which involves constructing a product DPP model for SCFs and SPs and introduced a new algorithm that utilizes the efficient DPP sampling algorithms to cluster together verbs with similar SCFs and SPs. 
The induced clusters performed well in evaluation against a VerbNet -based gold standard and proved useful in improving the quality of SCFs and SPs over strong baselines. Our results demonstrate the benefits of a unified framework for acquiring lexical informa10we experimented with other threshold values for this baseline but the recall in those case becomes very low. 869 tion about different aspects of verbal predicateargument structure. Not only the acquisition of different types information (syntactic and semantic) can support and inform each other, but also a unified framework can be useful for NLP tasks and applications which require rich information about predicate-argument structure. In future work we plan to apply our approach on larger scale data sets and gold standards and to evaluate it in different domains, languages and in the context of NLP tasks such as syntactic parsing and SRL. In addition, in our current framework SCF and SP information is used for clustering which is in turn used to improve SCF and SP quality. At this stage no further information flows from the SCF and SP models to the clustering model. A natural extension of our unified framework is to construct a joint model in which the predictions for all three tasks inform each other at all stages of the prediction process. Acknowledgements The work in this paper was funded by the Royal Society University Research Fellowship (UK). References Ivana Romina Altamirano and Laura Alonso i Alemany. 2010. IRASubcat, a highly customizable, language independent tool for the acquisition of verbal subcategorization information from corpus. In Proceedings of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Americas. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In COLINGACL-98. Roberto Basili, Diego De Cao, Paolo Marocco, and Marco Pennacchiotti. 2007. Learning selectional preferences for entailment or paraphrasing rules. In RANLP 2007, Borovets, Bulgaria. Rahul Bhagat, Patrick Pantel, and Eduard Hovy. 2007. Ledir: An unsupervised algorithm for learning directionality of inference rules. In EMNLP-07, page 161170, Prague, Czech Republic. Akshar Bharati, Sriram Venkatapathy, and Prashanth Reddy. 2005. Inferring semantic roles using subcategorization frames and maximum entropy model. In CoNLL-05. Ted Briscoe and John Carroll. 1997. Automatic extraction of subcategorization from corpora. In ANLP97. E.J. Briscoe, J. Carroll, and R. Watson. 2006. The second relsease of the rasp system. In COLING/ACL interactive presentation session. Glenn Carroll and Mats Rooth. 1996. Valence induction with a head-lexicalized pcfg. In EMNLP-96. Paula Chesley and Susanne Salmon-Alt. 2006. Automatic extraction of subcategorization frames for french. In LREC-06. Kostadin Cholakov and Gertjan van Noord. 2010. Using unknown word techniques to learn known words. In EMNLP-10. Hoa Trang Dang. 2004. Investigations into the Role of Lexical Semantics in Word Sense Disambiguation. Ph.D. thesis, CIS, University of Pennsylvania. Tim Van de Cruys, Laura Rimell, Thierry Poibeau, and Anna Korhonen. 2012. Multi-way tensor factorization for unsupervised lexical acquisition. In COLING-12. Lukasz Dkebowski. 2009. Valence extraction using EM selection and co-occurrence matrices. Language resources and evaluation, 43(4):301–327. Katrin Erk. 2007. A simple, similarity-based model for selectional preferences. In ACL 2007, Prague, Czech Republic. J. Gillenwater, A. Kulesza, and B. 
Taskar. 2012. Discovering diverse and salient threads in document collections. In EMNLP-12. Ralph Grishman, Catherine Macleod, and Adam Meyers. 1994. Comlex syntax: Building a computational lexicon. In COLNIG-94. G.E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Porceedings 0f NAACL-HLT-06 short papers. Dino Ienco, Serena Villata, and Cristina Bosco. 2008. Automatic extraction of subcategorization frames for italian. In LREC-08. Eric Joanis, Suzanne Stevenson, and David James. 2008. A general feature space for automatic verb classification. Natural Language Engineering. Daisuke Kawahara and Sadao Kurohashi. 2010. Acquiring reliable predicate-argument structures from raw corpora for case frame compilation. In LREC10. Karin Kipper-Schuler. 2005. VerbNet: A broadcoverage, comprehensive verb lexicon. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA, June. 870 Anna Korhonen, Yuval Krymolowski, and Nigel Collier. 2008. The choice of features for classification of verbs in biomedical texts. In Proceddings of COLING-08. Anna Korhonen. 2002. Semantically motivated subcategorization acquisition. In Proceedings of the ACL-02 workshop on Unsupervised lexical acquisition-Volume 9. A. Kulesza and B. Taskar. 2010. Structured determinantal point processes. In NIPS-10. A. Kulesza and B. Taskar. 2012a. k-dpps: fixed-size determinantal point processes. In ICML-11. A. Kulesza and B. Taskar. 2012b. Learning determinantal point processes. In UAI-12. Alex Kulesza and Ben Taskar. 2012c. Determinantal point processes for machine learning. In arXiv:1207.6083. A. Kulesza. 2012. Learning with determinantal point processes. Ph.D. thesis, CIS, University of Pennsylvania. Alessandro Lenci, Barbara McGillivray, Simonetta Montemagni, and Vito Pirrelli. 2008. Unsupervised acquisition of verb subcategorization frames from shallow-parsed corpora. In LREC-08. Beth Levin. 1993. English verb classes and alternations: A preliminary investigation. Chicago, IL. Jianguo Li and Chris Brew. 2008. Which are the best features for automatic verb classification. In ACL08. Tom Lippincott, Anna Korhonen, and Diarmuid ´O S´eaghdha. 2012. Learning syntactic verb frames using graphical models. In ACL-12, Jeju, Korea. C´edric Messiant, Anna Korhonen, and Thierry Poibeau. 2008. LexSchem: A large subcategorization lexicon for French verbs. In LREC-08. George A. Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41. Alessandro Moschitti and Roberto Basili. 2005. Verb subcategorization kernels for automatic semantic labeling. In Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition. Ruth O’Donovan, Michael Burke, Aoife Cahill, Josef van Genabith, and Andy Way. 2005. Large-scale induction and evaluation of lexical resources from the penn-ii and penn-iii treebanks. Computational Linguistics, 31:328–365. Diarmuid ´O S´eaghdha and Anna Korhonen. 2011. Probabilistic models of similarity in syntactic context. In EMNLP-11, Edinburgh, UK. Diarmuid ´O S´eaghdha. 2010. Latent variable models of selectional preference. In ACL-10, Uppsala, Sweden. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. Judita Preiss, Ted Briscoe, and Anna Korhonen. 2007. 
A system for large-scale acquisition of verbal, nominal and adjectival subcategorization frames from corpora. In ACL-07. Joseph Reisinger and Raymond Mooney. 2011. Crosscutting models of lexical semantics. In EMNLP-11, Edinburgh, UK. Alan Ritter and Oren Etzioni. 2010. A latent dirichlet allocation method for selectional preferences. In ACL-10. Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via em-based clustering. In ACL-99. Karin Kipper Schuler. 2006. VerbNet: A BroadCoverage, Comprehensive Verb Lexicon. Ph.D. thesis, University of Pennsylvania. S. Schulte im Walde, C. Hying, C. Scheible, and H. Schmid. 2008. Combining EM training and the MDL principle for an automatic verb classification incorporating selectional preferences. In ACL-08, pages 496–504. Sabine Schulte im Walde. 2006. Experiments on the automatic induction of german semantic verb classes. Computational Linguistics, 32(2):159–194. Lei Shi and Rada Mihalcea. 2005. Putting pieces together: Combining framenet, verbnet and wordnet for robust semantic parsing. In CICLING-05. Lin Sun and Anna Korhonen. 2009. Improving verb clustering with automatically acquired selectional preferences. In EMNLP-09, Singapore. Lin Sun and Anna Korhonen. 2011. Hierarchical verb clustering using graph factorization. In EMNLP-11. Lin Sun, Anna Korhonen, and Yuval Krymolowski. 2008. Verb class discovery from rich syntactic data. Lecture Notes in Computer Science, 4919(16). Robert Swier and Suzanne Stevenson. 2004. Unsupervised semantic role labelling. In EMNLP-04. Stefan Thater, Hagen Furstenau, and Manfred Pinkal. 2010. Contextualizing semantic representations using syntactically enriched vector models. In ACL10, Uppsala, Sweden. Tim Van de Cruys. 2009. A non-negative tensor factorization model for selectional preference induction. In Proceedings of the workshop on Geometric Models for Natural Language Semantics (GEMS). 871 Andreas Vlachos, Anna Korhonen, and Zoubin Ghahramani. 2009. Unsupervised and constrained dirichlet process mixture models for verb clustering. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics. 2008. Robustness and generalization of role sets: PropBank vs. VerbNet. Benat Zapirain, Eneko Agirre, and Lluis Marquex. 2009. Generalizing over lexical features: Selectional preferences for semantic role classification. In ACL-IJCNLP-09, Singapore. Guangyou Zhou, Jun Zhao, Kang Liu, and Li Cai. 2011. Exploiting web-derived selectional preference to improve statistical dependency parsing. In ACL-11, Portland, OR. 872
2013
85
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 873–883, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Semantic Frames to Predict Stock Price Movement Boyi Xie, Rebecca J. Passonneau, Leon Wu Center for Computational Learning Systems Columbia University New York, NY USA (bx2109|becky|leon.wu)@columbia.edu Germ´an G. Creamer Howe School of Technology Management Stevens Institute of Technology Hoboken, NJ USA [email protected] Abstract Semantic frames are a rich linguistic resource. There has been much work on semantic frame parsers, but less that applies them to general NLP problems. We address a task to predict change in stock price from financial news. Semantic frames help to generalize from specific sentences to scenarios, and to detect the (positive or negative) roles of specific companies. We introduce a novel tree representation, and use it to train predictive models with tree kernels using support vector machines. Our experiments test multiple text representations on two binary classification tasks, change of price and polarity. Experiments show that features derived from semantic frame parsing have significantly better performance across years on the polarity task. 1 Introduction A growing literature evaluates the financial effects of media on the market (Tetlock, 2007; Engelberg and Parsons, 2011). Recent work has applied NLP techniques to various financial media (conventional news, tweets) to detect sentiment in conventional news (Devitt and Ahmad, 2007; Haider and Mehrotra, 2011) or message boards (Chua et al., 2009), or discriminate expert from nonexpert investors in financial tweets (Bar-Haim et al., 2011). With the exception of Bar-Haim et al. (2011), these NLP studies have relied on small corpora of hand-labeled data for training or evaluation, and the connection to market events is done indirectly through sentiment detection. We hypothesize that conventional news can be used to predict changes in the stock price of specific companies, and that the semantic features that best represent relevant aspects of the news vary across On Wednesday, April 11th, 2012, Google Inc announced its first     quarterly earnings report, a week before the April 20 options contracts expiration in contrast to its history of reporting a day before monthly options expirations. The stock price of Google surged 3.85% from April 10th’s $626.86 to 12th’s $651.01. On Friday, April 13th, news reported Oracle Corp would sue     Google Inc , claiming Google’s Android operating system tramples     its intellectual property rights . Jury selection was set for the next Monday. Google’s stock price tumbled 4.06% on Friday, and continued to drop in the following week. Figure 1: Summary of financial news items pertaining to Google in April, 2012. market sectors. To test this hypothesis, we use price information to label data from six years of financial news. Our experiments test several document representations for two binary classification tasks, change of price and polarity. Our main contribution is a novel tree representation based on semantic frame parses that performs significantly better than enriched bag-of-words vectors. Figure 1 shows a constructed example based on extracts from financial news about Google in April, 2012. It illustrates how a series of events reported in the news precedes and potentially predicts a large change in Google’s stock price. 
Google’s early announcement of quarterly earnings possibly presages trouble, and its stock price falls soon after reports of a legal action against Google by Oracle. To produce a coherent story, the original sentences were edited for Figure 1, but they are in the style of actual sentences from our dataset. Accurate detection of events and relations that might have an impact on stock price should benefit from document representation that captures sentiment in lexical items (e.g., aggressive) combined with the conceptual relations captured by FrameNet (Ruppenhofer and Rehbein, 2012). A frame is a lexical semantic representa873 tion of the conceptual roles played by parts of a clause, and relates different lexical items (e.g., report, announce) to the same situation type. In the figure, some of the words that evoke frames have been underlined, and role fillers are outlined by boxes or ovals. Sentiment words are in italics. To the best of our knowledge, this paper is the first to apply semantic frames in this domain. On the polarity task, the semantic frame features encoded as trees perform significantly better across years and sectors than bag-of-words vectors (BOW), and outperform BOW vectors enhanced with semantic frame features, and a supervised topic modeling approach. The results on the price change task show the same trend, but are not statistically significant, possibly due to the volatility of the market in 2007 and the following several years. Yet even modest predictive performance on both tasks could have an impact, as discussed below, if incorporated into financial models such as Rydberg and Shephard (2003). We first discuss the motivation and related work. Section 4 presents vector-based and tree-based features from semantic frame parses, and section 5 describes our dataset. The experimental design and results appear in the following section, followed by discussion and conclusions. 2 Motivation Financial news is a rich vein for NLP applications to mine. Many news organizations that feature financial news, such as Reuters, the Wall Street Journal and Bloomberg, devote significant resources to the analysis of corporate news. Much of the data that would support studies of a link between the news media and the market are publicly available. As pointed out by Tetlock et al. (2008), linguistic communication is a potentially important source of information about firms’ fundamental values. Because very few stock market investors directly observe firms’ production activities, they get most of their information secondhand. Their three main sources are analysts’ forecasts, quantifiable publicly disclosed accounting variables, and descriptions of firms’ current and future profit-generating activities. If analyst and accounting variables are incomplete or biased measures of firms’ fundamental values, linguistic variables may have incremental explanatory power for firms’ future earnings and returns. Consider the following sentences: Oracle sued Google in August 2010, saying Google’s Android mobile operating system infringes its copyrights and patents for the Java programming language. (a) Oracle has accused Google of violating its intellectual property rights to the Java programming language. (b) Oracle has blamed Google and alleged that the latter has committed copyright infringement related to Java programming language held by Oracle. (c) Oracle’s Ellison says couldn’t sway Google on Java. 
(d) Sentences a, b and c are semantically similar, but lexically rather distinct: the shared words are the company names and Java (programming language). Bag-of-Words (BOW) document representation is difficult to surpass for many document classification tasks, but cannot capture the degree of semantic similarity among these sentences. Methods that have proven successful for paraphrase detection (Deerwester et al., 1990; Dolan et al., 2004), as in the main clauses of b and c, include latent variable models that simultaneously capture the semantics of words and sentences, such as latent semantic analysis (LSA) or latent Dirichlet allocation (LDA). However, our task goes beyond paraphrase detection. The first three sentences all indicate an adversarial relation of Oracle to Google involving a negative judgement. It would be useful to capture the similarities among all three of these sentences, and to distinguish the role of each company (who is suing and who is being sued). Further, these three sentences potentially have a greater impact on market perception of Google in contrast to a sentence like d, that refers to the same conflict more indirectly, and whose main clause verb is say. We hypothesize that semantic frames can address these issues. Most of the NLP literature on semantic frames addresses how to build robust semantic frame parsers, with intrinsic evaluation against gold standard parses. There have been few applications of semantic frame parsing for extrinsic tasks. To test for measurable benefits of semantic frame parsing, this paper poses the following questions: 1. Are semantic frames useful for document representation of financial news? 2. What aspects of frames are most useful? 3. What is the relative performance of document representation that relies on frames? 874 4. What improvements could be made to best exploit semantic frames? Our work is not aimed at investment profit. Rather, we investigate whether computational linguistic methodologies can improve our understanding of a company’s fundamental market value, and whether linguistic information derived from news produces a consistent enough result to benefit more comprehensive financial models. 3 Related Work NLP has recently been applied to financial text for market analysis, primarily using bag-ofwords (BOW) document representation. Luss and d’Aspremont (2008) use text classification to model price movements of financial assets on a per-day basis. They try to predict the direction of return, and abnormal returns, defined as an absolute return greater than a predefined threshold. Kogan et al. (2009) address a text regression problem to predict the financial risk of investment in companies. They analyze 10-K reports to predict stock return volatility. They also predict whether a company will be delisted following its 10-K report. Ruiz et al. (2012) correlate text with financial time series volume and price data. They find that graph centrality measures like page rank and degree are more strongly correlated to both price and traded volume for an aggregation of similar companies, while individual stocks are less correlated. Lavrenko et al. (2000) present an approach to identify news stories that influence the behavior of financial markets, and predict trends in stock prices based on the content of news stories that precede the trends. Luss and d’Aspremont (2008) and Lavrenko et al. (2000) both point out the desire for document feature engineering as future research directions. 
We explore a rich feature space that relies on frame semantic parsing. Sentiment analysis figures strongly in NLP work on news. General Inquirer (GI), a content analysis program, is used to quantify pessimism of news in Tetlock (2007) and Tetlock et al. (2008). Other resources for sentiment detection include the Dictionary of Affect in Language (DAL) to score the prior polarity of words, as in Agarwal et al. (2011) on social media data. Our study incorporates DAL scores along with other features. FrameNet is a rich lexical resource (Fillmore et al., 2003), based on the theory of frame semantics (Fillmore, 1976). There is active research Category Features Value type Frame F, FT, FE N attributes wF, wFT, wFE R≥0 BOW UniG, BiG, TriG N wUniG, wBiG, wTriG R≥0 pDAL all-Pls, all-Act, all-Img R∼µ=0,std=1 VB-Pls, VB-Act, VB-Img R∼µ=0,std=1 JJ-Pls, JJ-Act, JJ-Img R∼µ=0,std=1 RB-Pls, RB-Act, RB-Img R∼µ=0,std=1 Table 1: FWD features (Frame, bag-of-Words, part-of-speech DAL score) and their value types. to build more accurate parsers (Das and Smith, 2011; Das and Smith, 2012). Semantic role labeling using FrameNet has been used to identify an opinion with its holder and topic (Kim and Hovy, 2006). For deep representation of sentiment analysis, Ruppenhofer and Rehbein (2012) propose SentiFrameNet. Our work addresses classification tasks that have potential relevance to an influential financial model (Rydberg and Shephard, 2003). This model decomposes stock price analysis of financial data into a three-part ADS model - activity (a binary process modeling the price move or not), direction (another binary process modeling the direction of the moves) and size (a number quantifying the size of the moves). Our two binary classification tasks for news, price change and polarity, are analogous to their activity and direction. In contrast to the ADS model, our approach does not calculate the conditional probability of each factor. At present, our goal is limited to the determination of whether NLP features can uncover information from news that could help predict stock price movement or support analysts’ investigations. 4 Methods We propose two approaches for the use of semantic frames. The first is a rich vector space based on semantic frames, word forms and DAL affect scores. The second is a tree representation that encodes semantic frame features, and depends on tree kernel measures for support vector machine classification. The semantic parses of both methods are derived from SEMAFOR1 (Das and Smith, 2012; Das and Smith, 2011), which solves the semantic parsing problem by rule-based target identification, log-linear model based frame identification and frame element filling. 1http://www.ark.cs.cmu.edu/SEMAFOR. 875 Frame (F) Judgment comm. Commerce buy accuse buy Target (FT) sue purchase charge bid Frame COMMUNICATOR BUYER Element EVALUEE SELLER (FE) REASON GOODS Table 2: Sample frames. 4.1 Semantic Frame based FWD Features Table 1 lists 24 types of features, including semantic Frame attributes, bag-of-Words, and scores for words in the Dictionary of Affect in Language by part of speech (pDAL). We refer to these features as FWD features throughout the paper. FWD features are used alone and in combinations. FrameNet defines hundreds of frames, each of which represents a scenario associated with semantic roles, or frame elements, that serve as participants in the scenario the frame signifies. Table 2 shows two frames. The frame Judgment communication (JC or Judgment comm. 
in the rest of the paper) represents a scenario in which a COMMUNICATOR communicates a judgment of an EVALUEE for some REASON. It is evoked by (target) words such as accuse or sue. Here we use F for the frame name, FT for the target words, and FE for frame elements. We use both frequency and weighted scores. For example, we define idf-adjusted weighted frame features, such as wF for attribute F in document d as wFF,d = f(F, d) × log |D| |d∈D:F∈d|, where f(F, d) is the frequency of frame F in d, D is the whole document set and |·| is the cardinality operator. Bag-of-Words features include term frequency and tfidf of unigrams, bigrams, and trigrams. DAL (Dictionary of Affect in Language) is a psycholinguistic resource to measure the emotional meaning of words and texts (Whissel, 1989). It includes 8,742 words that were annotated for three dimensions: Pleasantness (Pls), Activation (Act), and Imagery (Img). Agarwal et al. (2009) introduced part-of-speech specific DAL features for sentiment analysis. We follow their approach by averaging the scores for all words, verb only, adjective only, and adverb only words. Feature values are normalized to mean of zero and standard deviation of one. 4.2 SemTree Feature Space and Kernels We propose SemTree as another feature space to encode semantic information in trees. SemTree can distinguish the roles of each company of interest, or designated object (e.g. who is suing and who is being sued). 4.2.1 Construction of Tree Representation The semantic frame parse of a sentence is a forest of trees, each of which corresponds to a semantic frame. SemTree encodes the original frame structure and its leaf words and phrases, and highlights a designated object at a particular node as follows. For each lexical item (target) that evokes a frame, a backbone is found by extracting the path from the root to the role filler mentioning a designated object; the backbone is then reversed to promote the designated object. If multiple frames have been assigned to the same designated object, their backbones are merged. Lastly, the frame elements and frame targets are inserted at the frame root. The top of Figure 2 shows the semantic parse for sentence a from section 2; we use it to illustrate tree construction for designated object Oracle. The parse has two frames (Figure 2-(1)&(2)), one corresponding to the main clause (verb sue), and the other for the tenseless adjunct (verb say). The reversed paths extracted from each frame root to the designated object Oracle become the backbones (Figures 2-(3)&(4)). After merging the two backbones we get the resulting SemTree, as shown in Figure 2-(5). By the same steps, this sentence would also yield a SemTree with Google at the root, in the role of EVALUEE. 4.2.2 Kernels and Tree Substructures The tree kernel (Moschitti, 2006; Collins and Duffy, 2002) is a function of tree similarity, based on common substructures (tree fragments). There are two types of substructures. A subtree (ST) is defined as any node of a tree along with all its descendants. A subset tree (SST) is defined as any node along with its immediate children and, optionally, part or all of the children’s descendants. Each tree is represented by a d dimensional vector where the i’th component counts the number of occurrences of the i’th tree fragment. Define the function hi(T) as the number of occurrences of the i’th tree fragment in tree T, so that T is now represented as h(T) = (h1(T), h2(T), ..., hd(T)). 
We define the set of nodes in tree T1 and T2 as NT1 and NT2 respectively. We define the indicator function Ii(n) to be 1 if subtree i is seen rooted at node n, and 0 otherwise. It follows that hi(T1) = P n1∈NT1 Ii(n1) 876 Designated object: Oracle (ORCL) Sentence: Oracle sued Google in August 2010, saying Google’s Android mobile operating system infringes its copyrights and patents for the Java programming language. SRL: [OracleJC.F E.Communicator,Stmt.F E.Speaker] [suedJC.T arget] [GoogleJC.F E.Evaluee] in August 2010, [sayingStmt.T arget] [Google´s Android mobile operating system infringes its copyrights and patents for the Java programming languageStmt.F E.Message]. (1) Judgment comm. FE.Evaluee GOOG FE.Communicator ORCL Judgment comm.Target sue (2) Statement FE.Message GOOG’s Android ... language FE.Speaker ORCL Statement.Target say (3) ORCL FE.Communicator Judgment comm. (4) ORCL FE.Speaker Statement (5) ORCL Speaker Statement FE.Message FE.Speaker Statement.Target say Communicator Judgment comm. FE.Evaluee FE.Communicator Judgment comm.Target sue Figure 2: Constructing a tree representation for the designated object Oracle in sentence shown. and hi(T2) = P n2∈NT2 Ii(n2). Their similarity can be efficiently computed by the inner product, K(T1, T2) = h(T1) · h(T2) = P i hi(T1)hi(T2) = P i(P n1∈NT1 Ii(n1))(P n2∈NT2 Ii(n2)) = P n1∈NT1 P n2∈NT2 P i Ii(ni)Ii(n2) = P n1∈NT1 P n2∈NT2 ∆(n1, n2) where ∆(n1, n2) is the number of common fragments rooted in the nodes n1 and n2. If the productions of these two nodes (themselves and their immediate children) differ, ∆(n1, n2) = 0; otherwise iterate their children recursively to evaluate ∆(n1, n2) = Q|children| j (σ+∆(cj n1, cj n2)) , where σ = 0 for ST kernel and σ = 1 for SST kernel. The kernel computational complexity is O(|NT1| × |NT2|), where all pairwise comparisons are carried out between T1 and T2. However, there are fast algorithms for kernel computation that run in linear time on average, either by dynamic programming (Collins and Duffy, 2002), or pre-sorting production rules before training (Moschitti, 2006). We use the latter. 5 Dataset We use publicly available financial news from Reuters from January 2007 through August 2012. This time frame includes a severe economic downturn in 2007-2010 followed by a modest recovery in 2011-2012. An information extraction pipeline is used to pre-process the data. News full text is extracted from HTML. The timestamp of the news is extracted for a later alignment with stock price information, which will be discussed in section 6. The company mentioned is identified by a rule-based matching of a finite list of companies. There are a total of 10 sectors in the Global Industry Classification Standard (GICS), an industry taxonomy used by the S&P 500.2 To explore our approach for this domain, we select three sectors for our experiment: Telecommunication Services (TS, the sector with the smallest number of companies), Information Technology (IT), and Consumer Staples (CS), due to our familiarity with the companies in these sectors and an expectation of different characteristics they may exhibit. In the expectation there would be semantic differences associated with these sectors, experiments are performed independently for each sector. There are also differences in the number of companies in the sector, and the amount of news. We bin news articles by sector. We remove articles that only list stock prices or only show tables of accounting reports. 
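As a concrete reference for the kernel of Section 4.2.2, a minimal sketch of the Δ recursion is given below, assuming a simple (label, children) tuple encoding of trees. The encoding, the helper names, and the counting of matching leaves as single fragments are simplifications of this sketch rather than details of SVM-light-TK.

def production(node):
    # a node's production: its label plus the sequence of its children's labels
    label, children = node
    return (label, tuple(child[0] for child in children))

def delta(n1, n2, sigma):
    # Common fragments rooted at n1 and n2; sigma = 0 gives the subtree (ST)
    # kernel, sigma = 1 the subset tree (SST) kernel.
    if production(n1) != production(n2):
        return 0
    children1, children2 = n1[1], n2[1]
    if not children1:                   # matching leaves / pre-terminals
        return 1
    prod = 1
    for c1, c2 in zip(children1, children2):
        prod *= sigma + delta(c1, c2, sigma)
    return prod

def nodes(tree):
    yield tree
    for child in tree[1]:
        yield from nodes(child)

def tree_kernel(t1, t2, sigma=1):
    # K(T1, T2) = sum of delta(n1, n2) over all node pairs
    return sum(delta(a, b, sigma) for a in nodes(t1) for b in nodes(t2))

# toy usage with two tiny SemTree-like fragments
t1 = ("ORCL", [("Communicator", [("Judgment_comm", [])])])
t2 = ("ORCL", [("Communicator", [("Judgment_comm", [])])])
print(tree_kernel(t1, t2, sigma=1))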
The first preprocessing step is to extract sentences that mention the 2Standard & Poor’s 500 is an equity market index that includes 500 U.S. leading companies in leading industries. 877 CS (N=40) IT (N=69) TS (N=8) avg # news 5,702±749 13446±1,272 2,177±188 avg # sentences 16,090±2,316 48,929±5,927 6,970±1,383 avg # com./sent. 1.07±0.01 1.06±0.20 1.14±0.03 avg # total 17,131±2,339 51,306±8,637 7,947±1,576 Table 3: Data statistics of mean and standard deviation by year from January 2007 to August 2012, for three sectors, with the number of companies. relevant companies. Each data instance is a sentence and one of the target companies it mentions. Table 3 summarizes the data statistics. For example, the consumer staples sector has 40 companies. It has an average of 5,702 news articles (16,090 sentences) per year. Each sentence that mentions a consumer staple company mentions 1.07 companies on average. On average, this sector has 17,131 instances per year. 6 Experiments Our current experiments are carried out for each year, training on one year and testing on the next. The choice to use a coarse time interval with no overlap was an expedience to permit more numerous exploratory experiments, given the computational resources these experiments require. We test the influence of news to predict (1) a change in stock price (change task), and (2) the polarity of change (increase vs. decrease; polarity task). Experiments evaluate the FWD and SemTree feature spaces compared to two baselines: bag-of-words (BOW) and supervised latent Dirichlet allocation (sLDA) (Blei and McAuliffe, 2007). BOW includes features of unigram, bigram and trigram. sLDA is a statistical model to classify documents based on LDA topic models, using labeled data. It has been applied to and shown good performance in topical text classification, collaborative filtering, and web page popularity prediction problems. 6.1 Labels, Evaluation Metrics, and Settings We align publicly available daily stock price data from Yahoo Finance with the Reuters news using a method to avoid back-casting. In particular, we use the daily adjusted closing price - the price quoted at the end of a trading day (4PM US Eastern Time), then adjusted by dividends, stock split, and other corporate actions. We create two types of labels for news documents using the price data, to label the existence of a change and the direction of change. Both tasks are treated as binary classification problems. Based on the finding of a one-day delay of the price response to the information embedded in the news by Tetlock et al. (2008), we use ∆t = 1 in our experiment. To constrain the number of parameters, we also use a threshold value (r) of a 2% change, based on the distribution of price changes across our data. In future work, this could be tuned to sector or time. change= ( +1 if |pt(0)+∆t−pt(−1)| pt(−1) > r −1 otherwise polarity=  +1 if pt(0)+∆t > pt(−1) and change = +1 −1 if pt(0)+∆t < pt(−1) and change = +1 pt(−1) is the adjusted closing price at the end of the last trading day, and pt(0)+∆t is the price of the end of the trading day after the ∆t day delay. Only the instances with changes are included in the polarity task. There is high variance across years in the proportion of positive labels, and often highly skewed classes in one direction or the other. The average ratios of +/- classes for change and polarity over the six years’ data are 0.73 (std=0.35) and 1.12 (std=0.25), respectively. 
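The two labels defined above reduce to a few lines of arithmetic. The following sketch uses the paper's threshold r = 2% and assumes the two adjusted closing prices have already been aligned to the news timestamp with the one-day delay; the function and its argument names are illustrative only.

def price_labels(p_prev, p_delayed, r=0.02):
    # p_prev: adjusted close of the last trading day before the news,
    # p_delayed: adjusted close after the one-day delay (delta_t = 1),
    # r: relative-change threshold (2% here).
    change = 1 if abs(p_delayed - p_prev) / p_prev > r else -1
    polarity = None                 # polarity is defined only when change = +1
    if change == 1:
        polarity = 1 if p_delayed > p_prev else -1
    return change, polarity

print(price_labels(100.0, 103.5))   # (1, 1): a rise of more than 2%
print(price_labels(100.0, 99.2))    # (-1, None): below the threshold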
Because the time frame for our experiments includes an economic crisis followed by a recovery period, we note that the ratio between increase and decrease of price flips between 2007, where it is 1.40, and 2008, where it is 0.71. Accuracy is very sensitive to skew: when a class has low frequency, accuracy can be high using a baseline that makes prediction on the majority class. Given the high data skew, and the large changes from year to year in positive versus negative skew, we use a more robust evaluation metric. Our evaluation relies on the Matthews correlation coefficient (MCC, also known as the φcoefficient) (Matthews, 1975) to avoid the bias of accuracy due to data skew, and to produce a robust summary score independent of whether the positive class is skewed to the majority or minority. In contrast to f-measure, which is a classspecific weighted average of precision and recall, and whose weighted version depends on a choice of whether the class-specific weights should come from the training or testing data, MCC is a single summary value that incorporates all 4 cells of a 2 × 2 confusion matrix (TP, FP, TN and FN for True or False Positive or Negative). We have also observed that MCC has a lower relative standard deviation than f-measure. For a 2 × 2 contingency table, MCC corresponds to the square root of the average χ2 statistic p χ2/n, with values in [-1,1]. It has been sug878 Change test years BOW sLDA FWD SemTreeFWD Consumer Staples 2008-2010 0.1015 0.0774 0.1079 0.1426 2011-2012 0.1663 0.1203 0.1664 0.1736 5 years 0.1274 0.0945 0.1313 0.1550 Information Technology 2008-2010 0.0580 0.0585 0.0701 0.0846 2011-2012 0.0894 0.0681 0.1076 0.1273 5 years 0.0705 0.0623 0.0851 0.1017 Telecommunication Services 2008-2010 0.1501 0.1615 0.1497 0.2409 2011-2012 0.2256 0.2084 0.2191 0.4009 5 years 0.1803 0.1803 0.1774 0.3049 Polarity Consumer Staples 2008-2010 0.0359 0.0383 0.0956 0.1054 2011-2012 0.0938 0.0270 0.1131 0.1285 5 years 0.0590 0.0338 0.1026 0.1147 p-value >>0.1000 0.0918 0.0489 Information Technology 2008-2010 0.0551 0.0332 0.0697 0.0763 2011-2012 0.0591 0.0516 0.0764 0.0857 5 years 0.0567 0.0405 0.0723 0.0801 p-value 0.0626 0.0948 0.0103 Telecommunication Services 2008-2010 0.0402 0.0464 0.0821 0.0745 2011-2012 0.0366 0.0781 0.0611 0.0809 5 years 0.0388 0.0591 0.0737 0.0770 p-value >>0.1000 0.0950 0.0222 Table 4: Average MCC for the change and polarity tasks by feature representation, for 2008-2010; for 2011-2012; for all 5 years and associated p-values of ANOVAs for comparison to BOW. gested as one of the best methods to summarize into a single value the confusion matrix of a binary classification task (Jurman and Furlanello, 2010; Baldi et al., 2000). Given the confusion matrix T P F N F P T N  : MCC = T P ·T N−F P ·F N √ (T P +F P )(T P +F N)(T N+F P )(T N+F N). All sentences with at least one company mention are used for the experiment. We remove stop words and use Stanford CoreNLP for partof-speech tagging and named entity recognition. Models are constructed using linear kernel support vector machines for both classification tasks. SVM-light with tree kernels3 (Joachims, 2006; Moschitti, 2006) is used for both the FWD and SemTree feature spaces. 6.2 Results Table 4 shows the mean MCC values for each task, for each sector. Separate means are shown for the test years of financial crisis (2008-2010) and economic recovery (2011-2012) to highlight the differences in performance that might result from market volatility. 
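For reference, MCC is computed directly from the four cells of the 2 × 2 confusion matrix; this small helper is an illustration, not the evaluation code used in the experiments.

import math

def mcc(tp, fp, tn, fn):
    # Matthews correlation coefficient; returns 0 when any marginal is empty.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

print(mcc(tp=60, fp=25, tn=40, fn=20))   # a moderately positive association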
3SVM-light: http://svmlight.joachims.org and Tree Kernels in SVM-light: http://disi.unitn.it/moschitti/TreeKernel.htm. pos. 1 dow, investors, index, retail, data pos. 2 costs, food, price, prices, named entity 4 neu. 1 q3, q1, nov, q2, apr neu. 2 cents, million, share, year, quarter neg. 1 cut, sales, prices, hurt, disappointing neg. 2 percent, call, company, fell, named entity 7 Table 5: Sample sLDA topics for consumer staples for test year 2010 (train on 2009), polarity task. SemTree combined with FWD (SemTreeFWD) generally gives the best performance in both change and polarity tasks. SemTree results here are based on the subset tree (SST) kernel, because of its greater precision in computing common frame structures and consistently better performance over the subtree (ST) kernel. SemTree also provides interpretable features for manual analysis as discussed in the next section. Analysis of Variance (ANOVA) tests were performed on the full 5 years for each sector, to compare each feature representation as a predictor of MCC score with the baseline BOW. The ANOVAs yield the p-values shown in Table 4. There were no significant differences from BOW on the change task. For polarity detection, SemTreeFWD was significantly better than BOW for each sector (see boldface p-values). No other method was significantly better than BOW, although FWD approaches significance on all sectors, and sLDA approaches significance on IT. sLDA has promising MCC scores for the telecommunication sector, which has only 8 companies, thus many fewer data instances. Table 5 displays a sample of sLDA topics with good performance on polarity for the consumer staples sector for training year 2009. The positive topics are related to stock index details and retail data. The negative topics contain many words with negative sentiment (e.g., hurt, disappointing). 7 Discussion 7.1 Semantic Parse Quality In general, SEMAFOR parses capture most of the important frames for our purposes. There is, however, significant room for improvement. On a small, randomly selected sample of sentences from all three sectors, two of the authors working independently evaluated the semantic parses, with approximately 80% agreement. Some of the inaccuracies in frame parses result from errors prior to the SEMAFOR parse, such as tokenization or 879 + (Target(jump)) + (RECIPIENT(Receiving)) + (VICTIM(Defend)) + (PERCEIVER AGENTIVE(Perception active(Target) (PERCEIVER AGENTIVE)(PHENOMENON))) + (DONOR(Giving(Target)(THEME)(DONOR))) + (Target(beats)) ... - (PHENOMENON(Perception active(Target)(PERCEIVER AGENTIVE)(PHENOMENON))) - (TRIGGER(Response)) - (Target(cuts)) - (VICTIM(Cause harm(Target(hurt))(VICTIM))) Figure 3: Best performing SemTree fragments for increase (+) and decrease (-) of price for consumer staples sector across training years. dependency parsing errors. The average sentence length for the sample was 33.3 words, with an average of 14 frames per sentence, 3 of them with a GICS company as a role filler. Because SemTree encodes only the frames containing a designated object (company), these are the frames we evaluated. On average, about half the frames with a designated object were correct, and two thirds of those frames we judged to be important. Besides errors due to incorrect tokenization or dependency parsing, we observed that about 8% to 10% of frames were incorrectly assigned to due word sense ambiguity. 
7.2 Feature Analysis The experimental results show the SemTree space to be the one representation tested here that is significantly better than BOW, but only for the polarity task. Post hoc analysis indicates this may be due to the aptness of semantic frame parsing for polarity. Limitations in our treatment of time point to directions for improvement regarding the change task. Some strengths of our approach are the separate treatment of different sectors, and the benefits of SemTree features. To analyze which were the best performing features within sectors, we extracted the best performing frame fragments for the polarity task using a tree kernel feature engineering method presented in Pighin and Moschitti (2009). The algorithm selects the most relevant features in accordance with the weights estimated by SVM, and uses these features to build an explicit representation of the kernel space. Figure 3 shows the best performing SemTree fragments of the polarity task for the consumer staples sector. Recall that we hypothesized differences in semantic frame features across sectors. This shows up as large differences in the strength of features across sectors. More strikingly, the same feature can differ in polarity across sectors. For example, in consumer staples, (EVALUEE(Judgment communication)) has positive polarity, compared with negative polarity in information technology sector. The examples we see indicate that the positive cases pertain to aggressive retail practices that lead to lawsuits with only small fines, but whose larger impact benefits the bottom line. A typical case is the sentence, The plaintiffs accused Wal-Mart of discriminating against disabled customers by mounting “point-of-sale” terminals in many stores at elevated heights that cannot be reached. Lawsuits in the IT sector, on the other hand, are often about technology patent disputes, and are more negative, as illustrated by our example sentence in Figure 2. SemTree features capture the differences between semantic roles for the same frame, and between the same semantic role in different frames. For example, the PERCEIVER AGENTIVE role of the Perception active frame contributes to prediction of an increase in price, as in R.J. Reynolds is watching this situation closely and will respond as appropriate. Conversely, a company that fills the PHENOMENON role of the same frame contributes to prediction of a price decrease, as in Investors will get a clearer look at how the market values the Philip Morris tobacco businesses when Altria Group Inc. “when-issued” shares begin trading on Tuesday. When a company fills the VICTIM role in the Cause harm frame, this can predict a decrease in price, as in Hershey has been hurt by soaring prices for cocoa, energy and other commodities, whereas filling the VICTIM role in the Defend frame is associated with an increase in price, as in At Berkshire’s annual shareholder meeting earlier this month, Warren Buffett defended Wal-Mart , saying the scandal did not change his opinion of the company. One weakness of our approach that we discussed above is that there is a strong effect of time that we do not address. The same SemTree feature can be predictive for one time period and not for another. (GOODS(Commerce sell)) is related to a decrease in price for 2008 and 2009 but to an increase in price for 2010-2012. There is clearly an influence of the overall economic context that we do not take into account. 
For example, 880 the practices of acquiring or selling a business are different in downturning versus recovering markets. An important observation of the MCC values, especially in the case of SemTreeFWD is that MCC increases during the years 2011-2012. We attribute this change to the difficulty of predicting stock price trends when there is the high volatility typical of a financial crisis. The effect of news on volatility, however, can be explored independently. For example, Creamer et al. (2012) detect a strong association. Another weakness of our approach is that we take sentences out of context, which can lead to prediction errors. For example, the sentence Longs’ real estate assets alone are worth some $2.9 billion, or $71.50 per share, Ackman wrote, meaning that CVS would essentially be paying for real estate, but gaining Longs’ pharmacy benefit management business and retail operations for free is treated as predicting a positive polarity for CVS. This would be accurate if CVS was actually going to acquire Longs’ business. Later in the same news item, however, there is a sentence indicating that the sale will not go through, which predicts negative polarity for CVS: Pershing Square Capital Management said on Thursday it won’t support a tender offer from CVS Caremark Corp for rival Longs Drug Stores Corp because the offer price “materially understates the fair value of the company,” according to a filing. 8 Conclusion We have presented a model for predicting stock price movement from news. We proposed FWD (Frames, BOW, and part-of-speech specific DAL) features and SemTree data representations. Our semantic frame-based model benefits from tree kernel learning using support vector machines. The experimental results for our feature representation perform significantly better than BOW on the polarity task, and show promise on the change task. It also facilitates human interpretable analysis to understand the relation between a company’s market value and its business activities. The signals generated by this algorithm could improve the prediction of a financial time series model, such as ADS (Rydberg and Shephard, 2003). Our future work will consider the contextual information for sentence selection, and an aggregation of weighted news content based on the decay effect over time for individual companies. We plan to use a moving window for training and testing. We will also explore different labeling methods, such as a threshold for price change tuned by sectors and background economics. 9 Acknowledgements The authors thank the anonymous reviewers for their insightful comments. References Apoorv Agarwal, Fadi Biadsy, and Kathleen Mckeown. 2009. Contextual phrase-level polarity analysis using lexical affect scoring and syntactic N-grams. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 24– 32, Athens, Greece, March. Association for Computational Linguistics. Apoorv Agarwal, Boyi Xie, Ilia Vovsha, Owen Rambow, and Rebecca Passonneau. 2011. Sentiment analysis of twitter data. In Proceedings of the Workshop on Languages in Social Media, LSM ’11, pages 30–38. Association for Computational Linguistics. Pierre Baldi, Søren Brunak, Yves Chauvin, Claus A. F. Andersen, and Henrik Nielsen. 2000. Assessing the accuracy of prediction algorithms for classification: an overview. Bioinformatics, 16:412 – 424. Roy Bar-Haim, Elad Dinur, Ronen Feldman, Moshe Fresko, and Guy Goldstein. 2011. Identifying and following expert investors in stock microblogs. 
In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1310–1319, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. David M. Blei and Jon D. McAuliffe. 2007. Supervised topic models. In Advances in Neural Information Processing Systems, Proceedings of the TwentyFirst Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 3-6. Christopher Chua, Maria Milosavljevic, and James R. Curran. 2009. A sentiment detection engine for internet stock message boards. In Proceedings of the Australasian Language Technology Association Workshop 2009, pages 89–93, Sydney, Australia, December. Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: kernels over discrete structures, and the voted perceptron. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 263– 270, Stroudsburg, PA, USA. Association for Computational Linguistics. 881 Germ´an G. Creamer, Yong Ren, and Jeffrey V. Nickerson. 2012. A Longitudinal Analysis of Asset Return, Volatility and Corporate News Network. In Business Intelligence Congress 3 Proceedings. Dipanjan Das and Noah A. Smith. 2011. Semisupervised frame-semantic parsing for unknown predicates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 1435–1444, Stroudsburg, PA, USA. Association for Computational Linguistics. Dipanjan Das and Noah A. Smith. 2012. Graph-based lexicon expansion with sparsity-inducing penalties. In HLT-NAACL, pages 677–687. The Association for Computational Linguistics. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science. Ann Devitt and Khurshid Ahmad. 2007. Sentiment polarity identification in financial news: A cohesionbased approach. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 984–991, Prague, Czech Republic, June. Association for Computational Linguistics. William Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. Proceedings of the 20th International Conference on Computational Linguistics. Joseph Engelberg and Christopher A. Parsons. 2011. The causal impact of media in financial markets. Journal of Finance, 66(1):67–97. Charles J. Fillmore, Christopher R. Johnson, and Miriam R. L. Petruck. 2003. Background to Framenet. International Journal of Lexicography, 16(3):235–250, September. Charles J. Fillmore. 1976. Frame semantics and the nature of language. Annals of the New York Academy of Sciences, 280(1):20–32. Syed Aqueel Haider and Rishabh Mehrotra. 2011. Corporate news classification and valence prediction: A supervised approach. In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2.011), pages 175–181, Portland, Oregon, June. Association for Computational Linguistics. Thorsten Joachims. 2006. Training linear svms in linear time. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’06, pages 217–226, New York, NY, USA. ACM. Giuseppe Jurman and Cesare Furlanello. 2010. A unifying view for performance measures in multi-class prediction. ArXiv e-prints. 
Soo-Min Kim and Eduard Hovy. 2006. Extracting opinions, opinion holders, and topics expressed in online news media text. In Proceedings of the Workshop on Sentiment and Subjectivity in Text, SST ’06, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics. Shimon Kogan, Dimitry Levin, Bryan R. Routledge, Jacob S. Sagi, and Noah A. Smith. 2009. Predicting risk from financial reports with regression. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL ’09, pages 272–280, Stroudsburg, PA, USA. Association for Computational Linguistics. Victor Lavrenko, Matt Schmill, Dawn Lawrie, Paul Ogilvie, David Jensen, and James Allan. 2000. Mining of concurrent text and time series. In In proceedings of the 6th ACM SIGKDD Int’l Conference on Knowledge Discovery and Data Mining Workshop on Text Mining, pages 37–44. Ronny Luss and Alexandre d’Aspremont. 2008. Predicting abnormal returns from news using text classification. CoRR, abs/0809.2792. Brian W. Matthews. 1975. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et Biophysica Acta (BBA) Protein Structure, 405(2):442 – 451. Alessandro Moschitti. 2006. Making tree kernels practical for natural language learning. In In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics. Daniele Pighin and Alessandro Moschitti. 2009. Reverse engineering of tree kernel feature spaces. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP 2009, 6-7 August 2009, Singapore, pages 111–120. Eduardo J. Ruiz, Vagelis Hristidis, Carlos Castillo, Aristides Gionis, and Alejandro Jaimes. 2012. Correlating financial time series with micro-blogging activity. In Proceedings of the fifth ACM international conference on Web search and data mining, WSDM ’12, pages 513–522, New York, NY, USA. ACM. Josef Ruppenhofer and Ines Rehbein. 2012. Semantic frames as an anchor representation for sentiment analysis. In Proceedings of the 3rd Workshop in Computational Approaches to Subjectivity and Sentiment Analysis, WASSA ’12, pages 104– 109, Stroudsburg, PA, USA. Association for Computational Linguistics. Tina H. Rydberg and Neil Shephard. 2003. Dynamics of Trade-by-Trade Price Movements: Decomposition and Models. Journal of Financial Econometrics, 1(1):2–25. 882 Paul C. Tetlock, Maytal Saar-Tsechansky, and Sofus Macskassy. 2008. More than Words: Quantifying Language to Measure Firms’ Fundamentals. The Journal of Finance. Paul C. Tetlock. 2007. Giving Content to Investor Sentiment: The Role of Media in the Stock Market. The Journal of Finance. Cynthia M. Whissel. 1989. The dictionary of affect in language. Emotion: Theory, Research, and Experience, 39(4):113–131. 883
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 884–893, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Density Maximization in Context-Sense Metric Space for All-words WSD Koichi Tanigaki†‡ Mitsuteru Shiba† Tatsuji Munaka† Yoshinori Sagisaka‡ † Information Technology R&D Center, Mitsubishi Electric Corporation 5-1-1 Ofuna, Kamakura, Kanagawa 247-8501, Japan ‡ Global Information and Telecommunication Institute, Waseda University 1-3-10 Nishi-Waseda, Shinjuku-ku, Tokyo 169-0051, Japan Abstract This paper proposes a novel smoothing model with a combinatorial optimization scheme for all-words word sense disambiguation from untagged corpora. By generalizing discrete senses to a continuum, we introduce a smoothing in context-sense space to cope with data-sparsity resulting from a large variety of linguistic context and sense, as well as to exploit senseinterdependency among the words in the same text string. Through the smoothing, all the optimal senses are obtained at one time under maximum marginal likelihood criterion, by competitive probabilistic kernels made to reinforce one another among nearby words, and to suppress conflicting sense hypotheses within the same word. Experimental results confirmed the superiority of the proposed method over conventional ones by showing the better performances beyond most-frequent-sense baseline performance where none of SemEval2 unsupervised systems reached. 1 Introduction Word Sense Disambiguation (WSD) is a task to identify the intended sense of a word based on its context. All-words WSD is its variant, where all the unrestricted running words in text are expected to be disambiguated. In the all-words task, all the senses in a dictionary are potentially the target destination of classification, and purely supervised approaches inherently suffer from data-sparsity problem. The all-words task is also characterized by sense-interdependency of target words. As the target words are typically taken from the same text string, they are naturally expected to be interrelated. Disambiguation of a word should affect other words as an important clue. From such characteristics of the task, knowledge-based unsupervised approaches have been extensively studied. They compute dictionary-based sense similarity to find the most related senses among the words within a certain range of text. (For reviews, see (Agirre and Edmonds, 2006; Navigli, 2009).) In recent years, graph-based methods have attracted considerable attentions (Mihalcea, 2005; Navigli and Lapata, 2007; Agirre and Soroa, 2009). On the graph structure of lexical knowledge base (LKB), random-walk or other well-known graph-based techniques have been applied to find mutually related senses among target words. Unlike earlier studies disambiguating word-by-word, the graph-based methods obtain sense-interdependent solution for target words. However, those methods mainly focus on modeling sense distribution and have less attention to contextual smoothing/generalization beyond immediate context. There exist several studies that enrich immediate context with large corpus statistics. McCarthy et al. (2004) proposed a method to combine sense similarity with distributional similarity and configured predominant sense score. Distributional similarity was used to weight the influence of context words, based on large-scale statistics. The method achieved successful WSD accuracy. Agirre et al. 
(2009) used the k nearest words by distributional similarity as context words. They applied an LKB graph-based WSD to a target word together with the distributional context words, and showed that it yields better results on a domain dataset than just using immediate context words. Though these studies perform word-by-word WSD for the target words, they demonstrated the effectiveness of enriching immediate context with corpus statistics.

This paper proposes a smoothing model that integrates dictionary-based semantic similarity and corpus-based context statistics, where a combinatorial optimization scheme is employed to deal with the sense interdependency of the all-words WSD task. The rest of this paper is structured as follows. We first describe our smoothing model in the following section. The combinatorial optimization method with the model is described in Section 3. Section 4 describes a specific implementation for evaluation. The evaluation is performed with the SemEval-2 English all-words dataset, and we present the performance in Section 5. In Section 6 we discuss whether the intended context-to-sense mapping and the sense interdependency are properly modeled. Finally, we review related studies in Section 7 and conclude in Section 8.

2 Smoothing Model

Let us introduce in this section the basic idea for modeling the context-to-sense mapping. The distance (or similarity) metrics are assumed to be given for context and for sense. A specific implementation of these metrics is described later in this paper; for now, the context metric is generalized with a distance function dx(·, ·) and the sense metric with ds(·, ·). These functions may be arbitrary ones that accept two elements and return a positive real number. Now suppose we are given a dataset concerning N target words. This dataset is denoted by X = {xi} (i = 1, . . . , N), where xi corresponds to the context of the i-th word, not the word itself. For each xi, the intended sense of the word is to be found in a set of sense candidates Si = {sij} (j = 1, . . . , Mi) ⊆ S, where Mi is the number of sense candidates for the i-th word and S is the whole set of sense inventories in a dictionary. Let the two-tuple hij = (xi, sij) be the hypothesis that the intended sense in xi is sij. The hypothesis is an element of the direct product H = X × S. As (X, dx) and (S, ds) each form a metric space, H is also a metric space, provided a proper distance definition based on dx and ds. Here, we treat the space H as a continuous one, which means that we assume the relationship between context and sense can be generalized in a continuous fashion. In natural language processing, continuity has sometimes been assumed for linguistic phenomena, including word context for corpus-based WSD. For classes or senses, it may not be a common assumption. However, when the classes for all-words WSD are enormous, fine-grained, and can be associated with a distance, we can rather naturally assume continuity also for senses. According to the nature of continuity, once given a hypothesis hij for a certain word, we can extrapolate the hypothesis for another word of another sense hi′j′ = (xi′, si′j′) sufficiently close to hij. Using a Gaussian kernel (Parzen, 1962) as a smoothing model, the probability density extrapolated at hi′j′ given hij is defined by their distance as follows:

K(hij, hi′j′) ≡ (1 / (2π σx σs)) exp[ −dx²(xi, xi′) / (2σx²) − ds²(sij, si′j′) / (2σs²) ],    (1)

where σx and σs are positive real parameters (σx, σs ∈ R+) called kernel bandwidths.
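Equation (1) translates directly into code. In the sketch below the two distance functions and the argument names are placeholders to be supplied by the metric implementation described later; the toy usage is purely illustrative.

import math

def kernel(x_i, s_ij, x_k, s_kl, d_x, d_s, sigma_x, sigma_s):
    # Gaussian kernel over the product context-sense space (Equation 1).
    norm = 1.0 / (2.0 * math.pi * sigma_x * sigma_s)
    return norm * math.exp(-d_x(x_i, x_k) ** 2 / (2.0 * sigma_x ** 2)
                           - d_s(s_ij, s_kl) ** 2 / (2.0 * sigma_s ** 2))

# toy usage with scalar "contexts" and "senses" and absolute-difference distances
d = lambda a, b: abs(a - b)
print(kernel(0.0, 0.0, 1.0, 2.0, d, d, sigma_x=1.0, sigma_s=1.0))

The only free parameters of the kernel are the bandwidths σx and σs.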
They control the smoothing intensity in context and in sense, respectively. Our objective is to determine the optimal sense for all the target words simultaneously. It is essentially a 0-1 integer programing problem, and is not computationally tractable. We relax the integer constraints by introducing a sense probability parameter πij corresponding to each hij. πij denotes the probability by which hij is true. As πij is a probability, it satisfies the constraints ∀i ∑ j πij = 1 and ∀i, j 0 ≤πij ≤1. The probability density extrapolated at hi′j′ by a probabilistic hypothesis hij is given as follows: Qij(hi′j′) ∝πij K(hij, hi′j′). (2) The proposed model is illustrated in Figure 1. Due to the limitation of drawing, both the context metric space and the sense metric space are drawn schematically as 1-dimensional spaces (axes), actually arbitrary metric spaces similarity-based or feature-based are applicable. The product metric space of the context metric space and the sense metric space composes a hypothesis space. In the hypothesis space, n sense hypotheses for a certain word is represented as n points on the hyperplane that spreads across the sense metric space. The two small circles in the middle of the figure represent the two sense hypotheses for a single word. The position of a hypothesis represents which sense is assigned to the current word in 885 decoy flora H.B. Tree(actor) tree diagram tree Sense Probability Hypothesis Context Metric Space Context (Input) Sense Metric Space Extrapolated Density Sense (Class) "Invasive, exotic plants cause particular problems for wildlife." "Exotic tree" Figure 1: Proposed probability distribution model for context-to-sense mapping space. what context. The upward arrow on a hypothesis represents the magnitude of its probability. Centered on each hypotheses, a Gaussian kernel is placed as a smoothing model. It extrapolates the hypotheses of other words around it. In accordance with geometric intuition, intensity of extrapolation is affected by the distance from a hypothesis, and by the probability of the hypothesis by itself. Extrapolated probability density is represented by shadow thickness and surface height. If there is another word in nearby context, the kernels can validate the sense of that word. In the figure, there are two kernels in the context “Invasive, exotic ...”. They are two competing hypothesis for the senses decoy and flora of the word plants. These kernels affect the senses of another ambiguous word tree in nearby context “Exotic ...”, and extrapolate the most at the sense tree nearby flora. The extrapolation has non-linear effect. It affects little to the word far away in context or in sense as is the case for the background word in the figure. Strength of smoothing is determined by kernel bandwidths. Wider bandwidths bring stronger effect of generalization to further hypotheses, but too wide bandwidths smooth out detailed structure. The bandwidths are the key for disambiguation, therefore they are to be optimized on a dataset together with sense probabilities. 3 Simultaneous Optimization of All-words WSD Given the smoothing model to extrapolate the senses of other words, we now make its instances interact to obtain the optimal combination of senses for all the words. 3.1 Likelihood Definition Let us first define the likelihood of model parameters for a given dataset. The parameters consist of a context bandwidth σx, a sense bandwidth σs, and sense probabilities πij for all i and j. 
For convenience of description, the sense probabilities are all together denoted as a vector π = (. . . , πij, . . . )⊤, in which actual order is not the matter. Now remind that our dataset X = {xi}N i=1 is composed of N instances of unlabeled word context. We consider all the mappings from context to sense are latent, and find the optimal parameters by maximizing marginal pseudo likelihood based on probability density. The likelihood is defined as follows: L(π, σx, σs; X) ≡ln ∏ i ∑ j πijQ(hij), (3) where ∏ i denotes the product over xi ∈X, ∑ j denotes the summation over all possible senses sij ∈Si for the current i-th context. Q(hij) denotes the probability density at hij. We compute Q(hij) using leave-one-out cross-validation (LOOCV), so as to prevent kernels from overfitting to themselves, as follows: Q(hij) (4) ≡ 1 N −Ni ∑ i′: wi′̸=wi ∑ j′ πi′j′K(hij, hi′j′), where Ni denotes the number of occurrences of a word type wi in X, and ∑ i′: wi′̸=wi denotes the summation over xi′ ∈X except the case that the word type wi′ equals to wi. ∑ j′ denotes the summation over si′j′ ∈Si′. We take as the unit of LOOCV not a word instance but a word type, because the instances of the same word type invariably have the same sense candidates, which still cause over-fitting when optimizing the sense bandwidth. 3.2 Parameter Optimization We are now ready to calculate the optimal senses. The optimal parameters π∗, σ∗ x, σ∗ s are obtained by maximizing the likelihood L subject to the constraints on π, that is ∀i ∑ j πij = 1 1. Using the Lagrange multipliers {λi}N i=1 for every i-th constraint, the solution for the constrained maximiza1It is guaranteed that the other constraints ∀i, j 0 ≤πij ≤ 1 are satisfied according to Equation (7). 886 tion of L is obtained as the solution for the equivalent unconstrained maximization of ˇL as follows: π∗, σ∗ x, σ∗ s = arg max π, σx, σs ˇL, (5) where ˇL ≡L + ∑ i λi (∑ j πij −1 ) . (6) When we optimize the parameters, the first term of Equation (6) in the right-hand side acts to reinforce nearby hypotheses among different words, whereas the second term acts to suppress conflicting hypotheses of the same word. Taking ∇ˇL = 0, erasing λi, and rearranging, we obtain the optimal parameters as follows: πij = ∑ i′, j′ wi′ ̸=wi R ij i′j′ + ∑ i′, j′ wi′ ̸=wi R i′j′ ij 1 + ∑ j ∑ i′, j′ wi′ ̸=wi R i′j′ ij (7) σx2 = 1 N ∑ i, i′, j, j′ wi′ ̸=wi R ij i′j′ dx2(xi, xi′) (8) σs2 = 1 N ∑ i, i′, j, j′ wi′ ̸=wi R ij i′j′ ds2(sij, si′j′), (9) where R ij i′j′ denotes the responsibility of hi′j′ to hij: the ratio of total expected density at hij, taken up by the expected density extrapolated by hi′j′, normalized to the total for xi be 1. It is defined as R ij i′j′ ≡ πijQi′j′(hij) ∑ j πijQ(hij). (10) Qi′j′(hij) denotes the probability density at hij extrapolated by hi′j′ alone, defined as follows: Qi′j′(hij) ≡ 1 N −Ni πi′j′K(hij, hi′j′). (11) Intuitively, Equations (7)-(9) are interpreted as follows. As for Equation (7), the right-hand side of the equation can be divided as the left term and the right term both in the numerator and in the denominator. The left term requires πij to agree with the ratio of responsibility of the whole to hij. The right term requires πij to agree with the ratio of responsibility of hij to the whole. As for Equation (8), (9), the optimal solution is the mean squared distance in context, and in sense, weighted by responsibility. To obtain the actual values of the optimal parameters, EM algorithm (Dempster et al., 1977) is applied. 
This is because Equations (7)-(9) are circular definitions, which include the objective parameters implicitly in the right hand side, thus the solution is not obtained analytically. EM algorithm is an iterative method for finding maximum likelihood estimates of parameters in statistical models, where the model depends on unobserved latent variables. Applying the EM algorithm to our model, we obtain the following steps: Step 1. Initialization: Set initial values to π, σx, and σs. As for sense probabilities, we set the uniform probability in accordance with the number of sense candidates, thereby πij ←|Si|−1, where |Si| denotes the size of Si. As for bandwidths, we set the mean squared distance in each metric; thereby σx2 ← N−1 ∑ i, i′ dx2(xi, xi′) for context bandwidth, and σs2 ← (∑ i|Si|)−1 ∑ i, i′ ∑ j, j′ ds2(sij, si′j′) for sense bandwidth. Step 2. Expectation: Using the current parameters π, σx, and σs, calculate the responsibilities R ij i′j′ according to Equation (10). Step 3. Maximization: Using the current responsibility R ij i′j′, update the parameters π, σx, and σs, according to Equation (7)-(9). Step 4. Convergence test: Compute the likelihood. If its ratio to the previous iteration is sufficiently small, or predetermined number of iterations has been reached, then terminate the iteration. Otherwise go back to Step 2. To visualize how it works, we applied the above EM algorithm to pseudo 2-dimensional data. The results are shown in Figure 2. It simulates WSD for an N = 5 words dataset, whose contexts are depicted by five lines. The sense hypotheses are depicted by twelve upward arrows. At the base of each arrow, there is a Gaussian kernel. Shadow thickness and surface height represents the composite probability distribution of all the twelve kernels. Through the iterative parameter update, sense probabilities and kernel bandwidths were optimized to the dataset. Figure 2(a) illustrates the initial status, where all the sense hypothesis are equivalently probable, thus they are in the most ambiguous status. Initial bandwidths are set to the mean squared distance of all the hypotheses pairs, 887 Context (Input) Sense (Class) (a) Initial status. Context (Input) Sense (Class) (b) Status after the 7th iteration. Context (Input) Sense (Class) (c) Converged status after 25 iterations. Figure 2: Pseudo 2D data simulation to visualize the dynamics of the proposed simultaneous all-words WSD with ambiguous five words and twelve sense hypotheses. (There are twelve Gaussian kernels at the base of each arrow, though the figure shows just their composite distribution. Those kernels reinforce and compete one another while being fitted their affecting range, and finally settle down to the most consistent interpretation for the words with appropriate generalization. For the dynamics with an actual dataset, see Figure 5.) which is rather broad and makes kernels strongly smoothed, thus the model captures general structure of space. Figure 2(b) shows the status after the 7th iteration. Bandwidths are shrinking especially in context, and two context clusters, so to speak, two usages, are found. Figure 2(c) shows the status of convergence after 25 iterations. All the arrow lengths incline to either 1 or 0 along with their neighbors, thus all the five words are now disambiguated. Note that this is not the conventional clustering of observed data. 
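To make Steps 1-4 concrete, the following self-contained sketch runs the loop on generic inputs. It keeps the leave-one-out restriction by word type, but the maximization step is a simplified responsibility-weighted variant that drops the reverse-responsibility term of Equation (7) and the per-word normalization of Equations (8)-(9); the function and argument names are assumptions of the sketch, not the authors' implementation.

import math

def em_wsd(contexts, senses, word_types, d_x, d_s, n_iter=50):
    # contexts[i]  : context representation x_i of the i-th target word
    # senses[i]    : candidate senses S_i
    # word_types[i]: word type w_i, used for the leave-one-out restriction
    N = len(contexts)
    hyps = [(i, j) for i in range(N) for j in range(len(senses[i]))]

    # Step 1: uniform sense probabilities, mean-squared-distance bandwidths
    pi = {(i, j): 1.0 / len(senses[i]) for i, j in hyps}
    sx2 = max(sum(d_x(contexts[i], contexts[k]) ** 2
                  for i in range(N) for k in range(N)) / (N * N), 1e-12)
    ss2 = max(sum(d_s(senses[i][j], senses[k][l]) ** 2
                  for i, j in hyps for k, l in hyps) / (len(hyps) ** 2), 1e-12)

    def K(i, j, k, l):
        # the constant 1/(2*pi*sigma_x*sigma_s) of Eq. (1) cancels in the
        # ratios below and is therefore omitted
        return math.exp(-d_x(contexts[i], contexts[k]) ** 2 / (2.0 * sx2)
                        - d_s(senses[i][j], senses[k][l]) ** 2 / (2.0 * ss2))

    for _ in range(n_iter):
        # Step 2: density extrapolated at h_ij by kernels of other word types
        dens = {(i, j): sum(pi[k, l] * K(i, j, k, l)
                            for k, l in hyps if word_types[k] != word_types[i])
                for i, j in hyps}
        # Step 3a: update sense probabilities (simplified self-consistent update)
        new_pi = {}
        for i in range(N):
            z = sum(pi[i, j] * dens[i, j] for j in range(len(senses[i]))) or 1e-300
            for j in range(len(senses[i])):
                new_pi[i, j] = pi[i, j] * dens[i, j] / z
        # Step 3b: bandwidths = responsibility-weighted mean squared distances
        w = {(i, j, k, l): new_pi[i, j] * pi[k, l] * K(i, j, k, l)
             for i, j in hyps for k, l in hyps if word_types[k] != word_types[i]}
        total = sum(w.values()) or 1e-300
        sx2 = max(sum(v * d_x(contexts[i], contexts[k]) ** 2
                      for (i, j, k, l), v in w.items()) / total, 1e-12)
        ss2 = max(sum(v * d_s(senses[i][j], senses[k][l]) ** 2
                      for (i, j, k, l), v in w.items()) / total, 1e-12)
        pi = new_pi
    # Step 4 is replaced here by a fixed iteration count
    return pi, math.sqrt(sx2), math.sqrt(ss2)

With real data, the contexts and candidate senses would come from the corpus and the dictionary, and d_x, d_s from the metric implementation of Section 4.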
If, for instance, the Gaussian mixture clustering of 2-mixtures is applied to the positions of these hypotheses, it will find the clusters just like Figure 2(b) and will stop. The cluster centers are located at the means of hypotheses including miscellaneous alternatives not intended, thus the estimated probability distribution is, roughly speaking, offset toward the center of WordNet, which is not what we want. In contrast, the proposed method proceeds to Figure 2(c) and finds clusters in the data after conflicting data is erased. This is because our method is aiming at modeling not the disambiguation of clustermemberships but the disambiguation of senses for each word. 4 Metric Space Implementation So far, we have dealt with general metrics for context and for sense. This section describes a specific implementation of those metrics employed in the evaluation. We followed the previous study by McCarthy et al. (2004), (2007), and implemented a type-based WSD. The context of word instances are tied to the distributional context of the word type in a large corpus. To calculate sense similarities, we used the WordNet similarity package by Pedersen et al. (2004), version 2.05. Two measures proposed by Jiang and Conrath (1997) and Lesk (1986) were examined, which performed best in the previous study (McCarthy et al., 2004). Distributional similarity (Lin, 1998) was computed among target words, based on the statistics of the test set and the background text provided as the official dataset of the SemEval-2 English all-words task (Agirre et al., 2010). Those texts were parsed using RASP parser (Briscoe et al., 2006) version 3.1, to obtain grammatical relations for the distributional similarity, as well as to obtain lemmata and part-of-speech (POS) tags which are required to look up the sense inventory of WordNet. Based on the distributional similarity, we just used k-nearest neighbor words as the context of each target word. Although it is an approximation, we can expect reliability improvement often seen by ignoring the lower part. In addition, this limitation of interactions highly reduces computational cost in particular when applying to larger-scale problems. To do this, the exhaustive sum ∑ i, i′: wi̸=wi′ in Equation (7)-(9) is altered by the local sum ∑ i, i′: (wi,wi′)∈PkNN, where PkNN denotes the set of word pairs of which either is a k-nearest neighbors of the other. The normalizing factors 1, N, and N −Ni in Equation (7), (8)-(9), and (11) are altered by the actual sum of responsibilities within those neighbors as ∑ i′, j, j′: (wi,wi′)∈PkNNR ij i′j′, 888 ∑ i, i′, j, j′: (wi,wi′)∈PkNNR ij i′j′, and ∑ ι, i′, j, j′: (wι,wi′)∈PkNN ∧ι̸=i R ιj i′j′, respectively. To treat the above similarity functions of context and of sense as distance functions, we use the conversion: d(·, ·) ≡−α ln(f(·, ·)/fmax), where d denotes the objective distance function, i.e., dx for context and ds for sense, while f and fmax denote the original similarity function and its maximum, respectively. α is a standardization coefficient, which is determined so that the mean squared distance be 1 in a dataset. According to this standardization, initial values of σx2, σs2 are always 1. 5 Evaluation To confirm the effect of the proposed smoothing model and its combinatorial optimization scheme, we conducted WSD evaluations. The primary evaluations compare our method with conventional ones, in Section 5.2. 
Supplementary evaluations are described in the subsequent sections that include the comparison with SemEval-2 participating systems, and the analysis of model dynamics with the experimental data. 5.1 Evaluation Scheme To make the evaluation comparable to state-ofthe-art systems, we used the official dataset of the SemEval-2 English all-words WSD task (Agirre et al., 2010), which is currently the latest public dataset available with published results. The dataset consists of test data and background documents of the same environment domain. The test data consists of 1,398 target words (1,032 nouns and 366 verbs) in 5.3K running words. The background documents consists of 2.7M running words, which was used to compute distributional similarity. Precisions and recalls were all computed using the official evaluation tool scorer2 in finegrained measure. The tool accepts answers either in probabilistic format (senses with probabilities for each target word) or in deterministic format (most likely senses, with no score information). As the proposed method is a probability model, we evaluated in the probabilistic way unless explicitly noted otherwise. For this reason, we evaluated all the sense probabilities as they were. Disambiguations were executed in separate runs for nouns and verbs, because no interaction takes place across POS in this metric implementation. The two runs’ results were combined later to a single answer to be input to scorer2. The context metric space was composed by knearest neighbor words of distributional similarity (Lin, 1998), as is described in Section 4. The value of k was evaluated for {2, 3, 5, 10, 20, 30, 50, 100, 200, 300}. As for sense metric space, we evaluated two measures i.e., (Jiang and Conrath, 1997) denoted as JCN, and (Lesk, 1986) denoted as Lesk. In every condition, stopping criterion of iteration is always the number of iteration (500 times), irrespective of the convergence in likelihood. Primary evaluations compared our method with two conventional methods. Those methods differ to ours only in scoring schemes. The first one is the method by McCarthy et al. (2004), which determines the word sense based on sense similarity and distributional similarity to the k-nearest neighbor words of a target word by distributional similarity. Our major advantage is the combinatorial optimization framework, while the conventional one employs word-by-word scheme. The second one is based on the method by Patwardhan et al. (2007), which determines the word sense by maximizing the sum of sense similarity to the k immediate neighbor words of a target word. The k words were forced to be selected from other target words of the same POS to the word of interest, so as to make information resource equivalent to the other comparable two methods. It is also a wordby-word method. It exploits no distributional similarity. Our major advantages are the combinatorial optimization scheme and the smoothing model to integrate distributional similarity. In the following section, these comparative methods are referred to as Mc2004 and Pat2007, respectively. 5.2 Comparison with Conventional Methods Let us first confirm our advantages compared to the conventional methods of Mc2004 and Pat2007. The comparative results are shown in Figure 3 in recall measure. Precisions are simply omitted because the difference to the recalls are always the number of failures on referring to WordNet by mislabeling of lemmata or POSs, which is always the same for the three methods. 
Vertical range depicts 95% confidence intervals. The graphs also indicate the most-frequent-sense (MFS) baseline estimated from out-of-domain corpora, whose recall is 0.505 (Agirre et al., 2010). As we can see in Figure 3(a) and 3(b), higher 889 Context k-NN Context k-NN 0.4 0.5 1 10 100 1000 0.4 0.5 1 10 100 1000 MFS Recall (a) JCN (b) Lesk Recall MFS Proposed Mc2004 Pat2007 Proposed Mc2004 Pat2007 Figure 3: Comparison to the conventional methods that differ to our method only in scoring schemes. Table 1: Comparison with the top-5 knowledgebased systems in SemEval-2 (JCN/k=5). Rank Participants R P Rn Rv Proposed (best) 50.8 51.0 52.5 46.2 MFS Baseline 50.5 50.5 52.7 44.3 1 Kulkarni et al. (2010) 49.5 51.2 51.6 43.4 2 Tran et al. (2010) 49.3 50.6 51.6 42.6 3 Tran et al. (2010) 49.1 50.4 51.5 42.5 4 Soroa et al. (2010) 48.1 48.1 48.7 46.2 5 Tran et al. (2010) 47.9 49.2 49.4 43.4 ... ... ... ... ... ... Random Baseline 23.2 23.2 25.3 17.2 recalls are obtained in the order of the proposed method, Mc2004, and Pat2007 on the whole. Comparing JCN and Lesk, difference among the three is smaller in Lesk. It is possibly because Lesk is a score not normalized for different word pairs, which makes the effect of distributional similarity unsteady especially when combining many k-nearest words. Therefore the recalls are expected to improve if proper normalization is applied to the proposed method and Mc2004. In JCN, the recalls of the proposed method significantly improve compared to Pat2007. Our best recall is 0.508 with JCN and k = 5. Thus we can conclude that, though significance depends on metrics, our smoothing model and the optimization scheme are effective to improve accuracies. 5.3 Comparison with SemEval-2 Systems We compared our best results with the participating systems of the task. Table 1 compares the details to the top-5 systems, which only includes unsupervised/knowledge-based ones and excludes supervised/weakly-supervised ones. Those values 0.3 0.4 0.5 Recall MFS Proposed (best) Rank Figure 4: Comparison with the all 20 knowledgebased systems in SemEval-2 (JCN/k=5). 0.5 1 0 100 200 300 400 500 1.08 1.09 Sense Probability 0 1 0 100 200 300 400 500 Iteration Context Bandwidth Sense Bandwidth σx 2 σs 2 πij Figure 5: Model dynamics through iteration with SemEval-2 nouns (JCN/k=5). are transcribed from the official report (Agirre et al., 2010). “R” and “P” denote the recall and the precision for the whole dataset, while “Rn” and “Rv” denote the recall for nouns and verbs, respectively. The results are ranked by “R”, in accordance with the original report. As shown in the table, our best results outperform all of the systems and the MFS baseline. Overall rankings are depicted in Figure 4. It maps our best results in the distribution of all the 20 unsupervised/knowledge-based participating systems. The ranges spreading left and right are 95% confidence intervals. As is seen from the figure, our best results are located above the top group, which are outside the confidence intervals of the other participants ranked intermediate or lower. 5.4 Analysis on Model Dynamics This section examines the model dynamics with the SemEval-2 data, which has been illustrated 890 0.4 0.5 0 100 200 300 400 500 Recall Iteration JCN Lesk Probabilistic Deterministic Probabilistic Deterministic Figure 6: Recall improvement via iteration with SemEval-2 all POSs (JCN/k=30, Lesk/k=10). with pseudo data in Section 3.2. 
Let us start by looking at the upper half of Figure 5, which shows the change of sense probabilities through iteration. At the initial status (iteration 0), the probabilities were all 1/2 for bi-semous words, all 1/3 for tri-semous words, and so forth. As iteration proceeded, the probabilities gradually spread out to either side of 1 or 0, and finally at iteration 500, we can observe that almost all the words were clearly disambiguated. The lower half of Figure 5 shows the dynamics in bandwidths. Vertical axis on the left is for the sense bandwidth, and on the right is for the context bandwidth. We can observe those bandwidths became narrower as iteration proceeded. Intensity of smoothing was dynamically adjusted by the whole disambiguation status. These behaviors confirm that even with an actual dataset, it works as is expected, just as illustrated in Figure 2. 6 Discussion This section discusses the validity of the proposed method as to i) sense-interdependent disambiguation and ii) reliability of data smoothing. We here analyze the second peak conditions at k = 30 (JCN) and k = 10 (Lesk) instead of the first peak at k = 5, because we can observe tendency the better with the larger number of word interactions. 6.1 Effects of Sense-interdependent Disambiguation Let us first examine the effect of our senseinterdependent disambiguation. We would like to confirm that how the progressive disambiguation is carried out. Figure 6 shows the change of recall through iteration for JCN (k = 30) and Lesk (k = 10). Those recalls were obtained by evaluating the status after each iteration. The recalls were here evaluated both in probabilistic format and in deterministic format. From the figure we can observe that the deterministic recalls also increased as well as the probabilistic recalls. This means that the ranks of sense candidates for each word were frequently altered through iteration, which further means that some new information not obtained earlier was delivered one after another to sense disambiguation for each word. From these results, we could confirm the expected sense-interdependency effect that a sense disambiguation of certain word affected to other words. 6.2 Reliability of Smoothing as Supervision Let us now discuss the reliability of our smoothing model. In our method, sense disambiguation of a word is guided by its nearby words’ extrapolation (smoothing). Sense accuracy fully depends on the reliability of the extrapolation. Generally speaking, statistical reliability increases as the number of random sampling increases. If we take sufficient number of random words as nearby words, the sense distribution comes close to the true distribution, and then we expect the statistically true sense distribution should find out the true sense of the target word, according to the distributional hypotheses (Harris, 1954). On the contrary, if we take nearby words that are biased to particular words, the sense distribution also becomes biased, and the extrapolation becomes less reliable. We can compute the randomness of words that affect for sense disambiguation, by word perplexity. Let the word of interest be w ∈V . The word perplexity is calculated as 2H|w, where H|w denotes the entropy defined as H|w ≡ −∑ w′∈V \{w} p(w′|w) log2 p(w′|w). The conditional probability p(w′|w) denotes the probability with which a certain word w′ ∈ V \ {w} determines the sense of w, which can be defined as the density ratio: p(w′|w) ∝ ∑ i: wi=w ∑ i′: wi′=w′ ∑ j,j′ Qi′j′(hij). 
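Given the distribution p(w′|w), the perplexity 2^(H|w) is a two-line computation; the sketch below assumes the probabilities have already been normalized from the density ratios above.

import math

def word_perplexity(probs):
    # probs: normalized p(w'|w) over candidate extrapolator words w'
    entropy = -sum(p * math.log2(p) for p in probs if p > 0.0)
    return 2.0 ** entropy

print(word_perplexity([0.5, 0.25, 0.25]))   # 2 ** 1.5, about 2.83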
The relation between word perplexity and the probability change for the ground-truth senses of nouns (JCN/k = 30) is shown in Figure 7. The upper histogram shows the change in iterations 1-100, and the lower shows that of iterations 101-500. We divide the analysis at iteration 100 because, roughly until the 100th iteration, the change in bandwidths converged and the number of interacting words settled, as can be seen in Figure 5.

Figure 7: Correlation between reliability and perplexity with SemEval-2 nouns (JCN/k=30). (For iterations 1 to 100 and 101 to 500, the change in the probability of the ground-truth sense, split into correct and wrong changes, is plotted against the perplexity of the extrapolator words.)

The bars that extend upward represent the sum of the amount raised (correct change), and the bars that extend downward represent the sum of the amount reduced (wrong change). From these figures, we observe that when the perplexity is sufficiently large (≥ 30), the change was largely (79%) in the correct direction. In contrast, at the lower left of the figure, where the perplexity is small (< 30) and the bandwidths have been narrowed in iterations 101-500, correct change accounts for only 32% of the whole. We can therefore conclude that if sufficiently random samples of nearby words are provided, our smoothing model is reliable, even though it is trained in an unsupervised fashion.

7 Related Work

As described in Section 1, graph-based WSD has been studied extensively, since graphs are a favorable structure for dealing with interactions of data on vertices. Conventional studies typically take instances of the input or the target class as vertices: knowledge-based approaches typically regard senses as vertices (see Section 1), while corpus-based approaches such as (Véronis, 2004) regard words as vertices and (Niu et al., 2005) regards contexts as vertices. Our method can also be viewed as a graph-based method, but it regards input-to-class mappings as vertices, and its edges represent relations both in context and in sense. Mihalcea (2005) proposed graph-based methods whose vertices are sense label hypotheses over a word sequence. Our method generalizes this context representation.

In the evaluation, our method was compared to the SemEval-2 systems. The main subject of the SemEval-2 task was domain adaptation, so those systems each exploited their own adaptation techniques. Kulkarni et al. (2010) used WordNet pre-pruning: disambiguation is performed by considering only those candidate synsets that belong to the top-k largest connected components of WordNet on the domain corpus. Tran et al. (2010) used over 3TB of domain documents acquired by Web search; they parsed those documents and extracted statistics on dependency relations for disambiguation. Soroa et al. (2010) used the method of Agirre et al. (2009) described in Section 1: they disambiguated each target word using its distributionally similar words instead of its immediate context words.

The proposed method is an extension of density estimation (Parzen, 1962), the construction of an estimate based on observed data. Our method naturally extends density estimation in two respects, which make it applicable to unsupervised knowledge-based WSD. First, we introduce a stochastic treatment of the data, which are no longer observations but hypotheses carrying ambiguity. This extension makes it possible for the hypotheses to cross-validate each other's plausibility.
Second, we extend the definition of density from Euclidean distance to general metric, which makes the proposed method applicable to a wide variety of corpus-based context similarities and dictionarybased sense similarities. 8 Conclusions We proposed a novel smoothing model with a combinatorial optimization scheme for all-words WSD from untagged corpora. Experimental results showed that our method significantly improves the accuracy of conventional methods by exceeding most-frequent-sense baseline performance where none of SemEval-2 unsupervised systems reached. Detailed inspection of dynamics clearly show that the proposed optimization method effectively exploit the sense-dependency of all-words. Moreover, our smoothing model, though unsupervised, provides reliable supervision when sufficiently random samples of words are available as nearby words. Thus it was confirmed that this method is valid for finding the optimal combination of word senses with large untagged corpora. We hope this study would elicit further investigation in this important area. 892 References Eneko Agirre and Philip Edmonds. 2006. Word sense disambiguation: Algorithms and applications, volume 33. Springer Science+ Business Media. Eneko Agirre and Aitor Soroa. 2009. Personalizing pagerank for word sense disambiguation. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 33–41. Eneko Agirre, Oier Lopez De Lacalle, Aitor Soroa, and Informatika Fakultatea. 2009. Knowledgebased wsd on specific domains: performing better than generic supervised wsd. In Proceedings of the 21st international jont conference on Artifical intelligence, pages 1501–1506. Eneko Agirre, Oier Lopez de Lacalle, Christiane Fellbaum, Shu-Kai Hsieh, Maurizio Tesconi, Monica Monachini, Piek Vossen, and Roxanne Segers. 2010. Semeval-2010 task 17: All-words word sense disambiguation on a specific domain. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 75–80. Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the rasp system. In Proceedings of the COLING/ACL on Interactive presentation sessions, pages 77–80. Arthur Pentland Dempster, Nan McKenzie Laird, and Donald Bruce Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society. Series B (Methodological), pages 1–38. Zellig Sabbetai Harris. 1954. Distributional structure. Word. Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. arXiv preprint cmp-lg/9709008. Anup Kulkarni, Mitesh M. Khapra, Saurabh Sohoney, and Pushpak Bhattacharyya. 2010. CFILT: Resource conscious approaches for all-words domain specific. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 421–426. Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of the 5th annual international conference on Systems documentation, pages 24–26. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th international conference on Computational linguisticsVolume 2, pages 768–774. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004. Finding predominant word senses in untagged text. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, pages 279–286. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2007. 
Unsupervised acquisition of predominant word senses. Computational Linguistics, 33(4):553–590. Rada Mihalcea. 2005. Unsupervised large-vocabulary word sense disambiguation with graph-based algorithms for sequence data labeling. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 411–418. Roberto Navigli and Mirella Lapata. 2007. Graph connectivity measures for unsupervised word sense disambiguation. In Proceedings of the 20th international joint conference on Artifical intelligence, pages 1683–1688. Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys (CSUR), 41(2):10. Zheng-Yu Niu, Dong-Hong Ji, and Chew Lim Tan. 2005. Word sense disambiguation using label propagation based semi-supervised learning. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 395–402. Emanuel Parzen. 1962. On estimation of a probability density function and mode. The annals of mathematical statistics, 33(3):1065–1076. Siddharth Patwardhan, Satanjeev Banerjee, and Ted Pedersen. 2007. UMND1: Unsupervised word sense disambiguation using contextual semantic relatedness. In proceedings of the 4th International Workshop on Semantic Evaluations, pages 390–393. Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. WordNet::Similarity: measuring the relatedness of concepts. In Demonstration Papers at HLT-NAACL 2004, pages 38–41. Aitor Soroa, Eneko Agirre, Oier Lopez de Lacalle, Monica Monachini, Jessie Lo, Shu-Kai Hsieh, Wauter Bosma, and Piek Vossen. 2010. Kyoto: An integrated system for specific domain WSD. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 417–420. Andrew Tran, Chris Bowes, David Brown, Ping Chen, Max Choly, and Wei Ding. 2010. TreeMatch: A fully unsupervised WSD system using dependency knowledge on a specific domain. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 396–401. Jean V´eronis. 2004. HyperLex: lexical cartography for information retrieval. Computer Speech & Language, 18(3):223–252. 893
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 894–904, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics The Role of Syntax in Vector Space Models of Compositional Semantics Karl Moritz Hermann and Phil Blunsom Department of Computer Science University of Oxford Oxford, OX1 3QD, UK {karl.moritz.hermann,phil.blunsom}@cs.ox.ac.uk Abstract Modelling the compositional process by which the meaning of an utterance arises from the meaning of its parts is a fundamental task of Natural Language Processing. In this paper we draw upon recent advances in the learning of vector space representations of sentential semantics and the transparent interface between syntax and semantics provided by Combinatory Categorial Grammar to introduce Combinatory Categorial Autoencoders. This model leverages the CCG combinatory operators to guide a non-linear transformation of meaning within a sentence. We use this model to learn high dimensional embeddings for sentences and evaluate them in a range of tasks, demonstrating that the incorporation of syntax allows a concise model to learn representations that are both effective and general. 1 Introduction Since Frege stated his ‘Principle of Semantic Compositionality’ in 1892 researchers have pondered both how the meaning of a complex expression is determined by the meanings of its parts, and how those parts are combined. (Frege, 1892; Pelletier, 1994). Over a hundred years on the choice of representational unit for this process of compositional semantics, and how these units combine, remain open questions. Frege’s principle may be debatable from a linguistic and philosophical standpoint, but it has provided a basis for a range of formal approaches to semantics which attempt to capture meaning in logical models. The Montague grammar (Montague, 1970) is a prime example for this, building a model of composition based on lambdacalculus and formal logic. More recent work in this field includes the Combinatory Categorial Grammar (CCG), which also places increased emphasis on syntactic coverage (Szabolcsi, 1989). Recently those searching for the right representation for compositional semantics have drawn inspiration from the success of distributional models of lexical semantics. This approach represents single words as distributional vectors, implying that a word’s meaning is a function of the environment it appears in, be that its syntactic role or co-occurrences with other words (Pereira et al., 1993; Sch¨utze, 1998). While distributional semantics is easily applied to single words, sparsity implies that attempts to directly extract distributional representations for larger expressions are doomed to fail. Only in the past few years has it been attempted to extend these representations to semantic composition. Most approaches here use the idea of vector-matrix composition to learn larger representations from single-word encodings (Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Socher et al., 2012b). While these models have proved very promising for compositional semantics, they make minimal use of linguistic information beyond the word level. In this paper we bridge the gap between recent advances in machine learning and more traditional approaches within computational linguistics. We achieve this goal by employing the CCG formalism to consider compositional structures at any point in a parse tree. 
CCG is attractive both for its transparent interface between syntax and semantics, and a small but powerful set of combinatory operators with which we can parametrise our nonlinear transformations of compositional meaning. We present a novel class of recursive models, the Combinatory Categorial Autoencoders (CCAE), which marry a semantic process provided by a recursive autoencoder with the syntactic representations of the CCG formalism. Through this model we seek to answer two ques894 tions: Can recursive vector space models be reconciled with a more formal notion of compositionality; and is there a role for syntax in guiding semantics in these types of models? CCAEs make use of CCG combinators and types by conditioning each composition function on its equivalent step in a CCG proof. In terms of learning complexity and space requirements, our models strike a balance between simpler greedy approaches (Socher et al., 2011b) and the larger recursive vector-matrix models (Socher et al., 2012b). We show that this combination of state of the art machine learning and an advanced linguistic formalism translates into concise models with competitive performance on a variety of tasks. In both sentiment and compound similarity experiments we show that our CCAE models match or better comparable recursive autoencoder models.1 2 Background There exist a number of formal approaches to language that provide mechanisms for compositionality. Generative Grammars (Jackendoff, 1972) treat semantics, and thus compositionality, essentially as an extension of syntax, with the generative (syntactic) process yielding a structure that can be interpreted semantically. By contrast Montague grammar achieves greater separation between the semantic and the syntactic by using lambda calculus to express meaning. However, this greater separation between surface form and meaning comes at a price in the form of reduced computability. While this is beyond the scope of this paper, see e.g. Kracht (2008) for a detailed analysis of compositionality in these formalisms. 2.1 Combinatory Categorial Grammar In this paper we focus on CCG, a linguistically expressive yet computationally efficient grammar formalism. It uses a constituency-based structure with complex syntactic types (categories) from which sentences can be deduced using a small number of combinators. CCG relies on combinatory logic (as opposed to lambda calculus) to build its expressions. For a detailed introduction and analysis vis-`a-vis other grammar formalisms see e.g. Steedman and Baldridge (2011). CCG has been described as having a transparent surface between the syntactic and the seman1A C++ implementation of our models is available at http://www.karlmoritz.com/ Tina likes tigers N (S[dcl]\NP)/NP N NP NP > S[dcl]\NP < S[dcl] Figure 1: CCG derivation for Tina likes tigers with forward (>) and backward application (<). tic. It is this property which makes it attractive for our purposes of providing a conditioning structure for semantic operators. A second benefit of the formalism is that it is designed with computational efficiency in mind. While one could debate the relative merits of various linguistic formalisms the existence of mature tools and resources, such as the CCGBank (Hockenmaier and Steedman, 2007), the Groningen Meaning Bank (Basile et al., 2012) and the C&C Tools (Curran et al., 2007) is another big advantage for CCG. 
CCG’s transparent surface stems from its categorial property: Each point in a derivation corresponds directly to an interpretable category. These categories (or types) associated with each term in a CCG govern how this term can be combined with other terms in a larger structure, implicitly making them semantically expressive. For instance in Figure 1, the word likes has type (S[dcl]\NP)/NP, which means that it first looks for a type NP to its right hand side. Subsequently the expression likes tigers (as type S[dcl]\NP) requires a second NP on its left. The final type of the phrase S[dcl] indicates a sentence and hence a complete CCG proof. Thus at each point in a CCG parse we can deduce the possible next steps in the derivation by considering the available types and combinatory rules. 2.2 Vector Space Models of Semantics Vector-based approaches for semantic tasks have become increasingly popular in recent years. Distributional representations encode an expression by its environment, assuming the contextdependent nature of meaning according to which one “shall know a word by the company it keeps” (Firth, 1957). Effectively this is usually achieved by considering the co-occurrence with other words in large corpora or the syntactic roles a word performs. Distributional representations are frequently used to encode single words as vectors. Such rep895 resentations have then successfully been applied to a number of tasks including word sense disambiguation (Sch¨utze, 1998) and selectional preference (Pereira et al., 1993; Lin, 1999). While it is theoretically possible to apply the same mechanism to larger expressions, sparsity prevents learning meaningful distributional representations for expressions much larger than single words.2 Vector space models of compositional semantics aim to fill this gap by providing a methodology for deriving the representation of an expression from those of its parts. While distributional representations frequently serve to encode single words in such approaches this is not a strict requirement. There are a number of ideas on how to define composition in such vector spaces. A general framework for semantic vector composition was proposed in Mitchell and Lapata (2008), with Mitchell and Lapata (2010) and more recently Blacoe and Lapata (2012) providing good overviews of this topic. Notable approaches to this issue include Baroni and Zamparelli (2010), who compose nouns and adjectives by representing them as vectors and matrices, respectively, with the compositional representation achieved by multiplication. Grefenstette and Sadrzadeh (2011) use a similar approach with matrices for relational words and vectors for arguments. These two approaches are combined in Grefenstette et al. (2013), producing a tensor-based semantic framework with tensor contraction as composition operation. Another set of models that have very successfully been applied in this area are recursive autoencoders (Socher et al., 2011a; Socher et al., 2011b), which are discussed in the next section. 2.3 Recursive Autoencoders Autoencoders are a useful tool to compress information. One can think of an autoencoder as a funnel through which information has to pass (see Figure 2). By forcing the autoencoder to reconstruct an input given only the reduced amount of information available inside the funnel it serves as a compression tool, representing highdimensional objects in a lower-dimensional space. 
Typically a given autoencoder, that is the functions for encoding and reconstructing data, are 2The experimental setup in (Baroni and Zamparelli, 2010) is one of the few examples where distributional representations are used for word pairs. Figure 2: A simple three-layer autoencoder. The input represented by the vector at the bottom is being encoded in a smaller vector (middle), from which it is then reconstructed (top) into the same dimensionality as the original input vector. used on multiple inputs. By optimizing the two functions to minimize the difference between all inputs and their respective reconstructions, this autoencoder will effectively discover some hidden structures within the data that can be exploited to represent it more efficiently. As a simple example, assume input vectors xi ∈Rn, i ∈(0..N), weight matrices W enc ∈ R(m×n), W rec ∈R(n×m) and biases benc ∈Rm, brec ∈Rn. The encoding matrix and bias are used to create an encoding ei from xi: ei = fenc(xi) = W encxi + benc (1) Subsequently e ∈Rm is used to reconstruct x as x′ using the reconstruction matrix and bias: x′ i = frec(ei) = W recei + brec (2) θ = (W enc, W rec, benc, brec) can then be learned by minimizing the error function describing the difference between x′ and x: E = 1 2 N X i x′ i −xi 2 (3) Now, if m < n, this will intuitively lead to ei encoding a latent structure contained in xi and shared across all xj, j ∈(0..N), with θ encoding and decoding to and from that hidden structure. It is possible to apply multiple autoencoders on top of each other, creating a deep autoencoder (Bengio et al., 2007; Hinton and Salakhutdinov, 2006). For such a multi-layered model to learn anything beyond what a single layer could learn, a non-linear transformation g needs to be applied at each layer. Usually, a variant of the sigmoid (σ) 896 Figure 3: RAE with three inputs. Vectors with filled (blue) circles represent input and hidden units; blanks (white) denote reconstruction layers. or hyperbolic tangent (tanh) function is used for g (LeCun et al., 1998). fenc(xi) = g (W encxi + benc) (4) frec(ei) = g (W recei + brec) Furthermore, autoencoders can easily be used as a composition function by concatenating two input vectors, such that: e = f(x1, x2) = g (W(x1∥x2) + b) (5) (x′ 1∥x′ 2) = g W ′e + b′ Extending this idea, recursive autoencoders (RAE) allow the modelling of data of variable size. By setting the n = 2m, it is possible to recursively combine a structure into an autoencoder tree. See Figure 3 for an example, where x1, x2, x3 are recursively encoded into y2. The recursive application of autoencoders was first introduced in Pollack (1990), whose recursive auto-associative memories learn vector representations over pre-specified recursive data structures. More recently this idea was extended and applied to dynamic structures (Socher et al., 2011b). These types of models have become increasingly prominent since developments within the field of Deep Learning have made the training of such hierarchical structures more effective and tractable (LeCun et al., 1998; Hinton et al., 2006). Intuitively the top layer of an RAE will encode aspects of the information stored in all of the input vectors. 
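To make equations (1) through (5) concrete, here is a minimal numpy sketch of a single autoencoder used both for reconstruction and as a composition function over concatenated child vectors. The dimensions, random initialization, and toy inputs are illustrative assumptions and are not tied to the implementation released with the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4  # input and hidden dimensionality; n = 2m so codes match children

# Parameters theta = (W_enc, W_rec, b_enc, b_rec), randomly initialized here.
W_enc = rng.normal(scale=0.1, size=(m, n))
b_enc = np.zeros(m)
W_rec = rng.normal(scale=0.1, size=(n, m))
b_rec = np.zeros(n)

g = np.tanh  # the non-linearity used in the paper's experiments

def encode(x):
    # Equations (1)/(4): e = g(W_enc x + b_enc)
    return g(W_enc @ x + b_enc)

def reconstruct(e):
    # Equations (2)/(4): x' = g(W_rec e + b_rec)
    return g(W_rec @ e + b_rec)

def reconstruction_error(xs):
    # Equation (3): E = 1/2 * sum_i ||x'_i - x_i||^2
    return 0.5 * sum(np.sum((reconstruct(encode(x)) - x) ** 2) for x in xs)

def compose(x1, x2):
    # Equation (5): the autoencoder as a composition function over (x1 || x2);
    # with n = 2m the resulting code has the same size as either child.
    return encode(np.concatenate([x1, x2]))

xs = [rng.normal(size=n) for _ in range(3)]
print(reconstruction_error(xs))
print(compose(rng.normal(size=m), rng.normal(size=m)).shape)  # (m,)
```

Recursive application then simply feeds composed codes back in as children, which is what allows structures of variable size to be handled.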
Previously, RAE have successfully been applied to a number of tasks including sentiment analysis, paraphrase detection, relation extraction Model CCG Elements CCAE-A parse CCAE-B parse + rules CCAE-C parse + rules + types CCAE-D parse + rules + child types Table 1: Aspects of the CCG formalism used by the different models explored in this paper. and 3D object identification (Blacoe and Lapata, 2012; Socher et al., 2011b; Socher et al., 2012a). 3 Model The models in this paper combine the power of recursive, vector-based models with the linguistic intuition of the CCG formalism. Their purpose is to learn semantically meaningful vector representations for sentences and phrases of variable size, while the purpose of this paper is to investigate the use of syntax and linguistic formalisms in such vector-based compositional models. We assume a CCG parse to be given. Let C denote the set of combinatory rules, and T the set of categories used, respectively. We use the parse tree to structure an RAE, so that each combinatory step is represented by an autoencoder function. We refer to these models Categorial Combinatory Autoencoders (CCAE). In total this paper describes four models making increasing use of the CCG formalism (see table 1). As an internal baseline we use model CCAEA, which is an RAE structured along a CCG parse tree. CCAE-A uses a single weight matrix each for the encoding and reconstruction step (see Table 2. This model is similar to Socher et al. (2011b), except that we use a fixed structure in place of the greedy tree building approach. As CCAE-A uses only minimal syntactic guidance, this should allow us to better ascertain to what degree the use of syntax helps our semantic models. Our second model (CCAE-B) uses the composition function in equation (6), with c ∈C. fenc(x, y, c) = g (W c enc(x∥y) + bc enc) (6) frec(e, c) = g (W c rece + bc rec) This means that for every combinatory rule we define an equivalent autoencoder composition function by parametrizing both the weight matrix and bias on the combinatory rule (e.g. Figure 4). In this model, as in the following ones, we assume a reconstruction step symmetric with the 897 Model Encoding Function CCAE-A f(x, y)= g (W(x∥y) + b) CCAE-B f(x, y, c)= g (W c(x∥y) + bc) CCAE-C f(x, y, c, t)= g P p∈{c,t} (W p(x∥y) + bp)  CCAE-D f(x, y, c, tx, ty)= g W c W txx + W tyy  + bc Table 2: Encoding functions of the four CCAE models discussed in this paper. α : X/Y β : Y > αβ : X g (W > enc(α∥β) + b> enc) Figure 4: Forward application as CCG combinator and autoencoder rule respectively. Figure 5: CCAE-B applied to Tina likes tigers. Next to each vector are the CCG category (top) and the word or function representing it (bottom). lex describes the unary type-changing operation. > and < are forward and backward application. composition step. For the remainder of this paper we will focus on the composition step and drop the use of enc and rec in variable names where it isn’t explicitly required. Figure 5 shows model CCAEB applied to our previous example sentence. While CCAE-B uses only the combinatory rules, we want to make fuller use of the linguistic information available in CCG. For this purpose, we build another model CCAE-C, which parametrizes on both the combinatory rule c ∈C and the CCG category t ∈T at every step (see Figure 2). This model provides an additional degree of insight, as the categories T are semantically and syntactically more expressive than the CCG combinatory rules by themselves. 
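As a rough sketch of how the encoding functions of Table 2 can be conditioned on the CCG derivation, the code below parametrizes the composition on the combinatory rule (CCAE-B, equation 6) and additionally sums a category-specific term (CCAE-C). The lazy parameter creation, the helper names, and the toy vectors are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4  # embedding width; children and parent vectors all live in R^m
g = np.tanh

def _new_params():
    return rng.normal(scale=0.1, size=(m, 2 * m)), np.zeros(m)

# One (W, b) pair per CCG combinatory rule and per category, created on demand;
# a real model would restrict these to a fixed inventory of rules and types.
rule_params = {}
type_params = {}

def _get(params, key):
    if key not in params:
        params[key] = _new_params()
    return params[key]

def compose_ccae_b(x, y, rule):
    # CCAE-B: f(x, y, c) = g(W^c (x||y) + b^c), conditioned on the combinator.
    W, b = _get(rule_params, rule)
    return g(W @ np.concatenate([x, y]) + b)

def compose_ccae_c(x, y, rule, category):
    # CCAE-C: f(x, y, c, t) = g( sum_{p in {c, t}} (W^p (x||y) + b^p) ),
    # summing a rule-specific and a category-specific transformation.
    xy = np.concatenate([x, y])
    total = np.zeros(m)
    for key, params in ((rule, rule_params), (category, type_params)):
        W, b = _get(params, key)
        total += W @ xy + b
    return g(total)

# "Tina likes tigers": combine "likes" and "tigers" by forward application (>),
# then the result with "Tina" by backward application (<).
tina, likes, tigers = (rng.normal(size=m) for _ in range(3))
vp = compose_ccae_b(likes, tigers, rule=">")
sentence = compose_ccae_c(tina, vp, rule="<", category="S[dcl]")
print(sentence.shape)  # (m,)
```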
Summing over weights parametrised on c and t respectively, adds an additional degree of freedom and also allows for some model smoothing. An alternative approach is encoded in model CCAE-D. Here we consider the categories not of the element represented, but of the elements it is generated from together with the combinatory rule applied to them. The intuition is that in the first step we transform two expressions based on their syntax. Subsequently we combine these two conditioned on their joint combinatory rule. 4 Learning In this section we briefly discuss unsupervised learning for our models. Subsequently we describe how these models can be extended to allow for semi-supervised training and evaluation. Let θ = (W, B, L) be our model parameters and λ a vector with regularization parameters for all model parameters. W represents the set of all weight matrices, B the set of all biases and L the set of all word vectors. Let N be the set of training data consisting of tree-nodes n with inputs xn, yn and reconstruction rn. The error given n is: E(n|θ) = 1 2 rn −(xn∥yn) 2 (7) The gradient of the regularised objective function then becomes: ∂J ∂θ = 1 N N X n ∂E(n|θ) ∂θ + λθ (8) We learn the gradient using backpropagation through structure (Goller and K¨uchler, 1996), and minimize the objective function using L-BFGS. For more details about the partial derivatives used for backpropagation, see the documentation accompanying our model implementation.3 3http://www.karlmoritz.com/ 898 4.1 Supervised Learning The unsupervised method described so far learns a vector representation for each sentence. Such a representation can be useful for some tasks such as paraphrase detection, but is not sufficient for other tasks such as sentiment classification, which we are considering in this paper. In order to extract sentiment from our models, we extend them by adding a supervised classifier on top, using the learned representations v as input for a binary classification model: pred(l=1|v, θ) = sigmoid(Wlabel v + blabel) (9) Given our corpus of CCG parses with label pairs (N, l), the new objective function becomes: J = 1 N X (N,l) E(N, l, θ) + λ 2 ||θ||2 (10) Assuming each node n ∈N contains children xn, yn, encoding en and reconstruction rn, so that n = {x, y, e, r} this breaks down into: E(N, l, θ) = (11) X n∈N αErec (n, θ) + (1−α)Elbl(en, l, θ) Erec(n, θ) = 1 2 [xn∥yn] −rn 2 (12) Elbl(e, l, θ) = 1 2 ∥l −e∥2 (13) This method of introducing a supervised aspect to the autoencoder largely follows the model described in Socher et al. (2011b). 5 Experiments We describe a number of standard evaluations to determine the comparative performance of our model. The first task of sentiment analysis allows us to compare our CCG-conditioned RAE with similar, existing models. In a second experiment, we apply our model to a compound similarity evaluation, which allows us to evaluate our models against a larger class of vector-based models (Blacoe and Lapata, 2012). We conclude with some qualitative analysis to get a better idea of whether the combination of CCG and RAE can learn semantically expressive embeddings. In our experiments we use the hyperbolic tangent as nonlinearity g. Unless stated otherwise we use word-vectors of size 50, initialized using the embeddings provided by Turian et al. (2010) based on the model of Collobert and Weston (2008).4 We use the C&C parser (Clark and Curran, 2007) to generate CCG parse trees for the data used in our experiments. 
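Before turning to the individual tasks, the following sketch spells out the per-node error of equations (11) through (13), interpolated by α. The label error here is computed on the prediction of equation (9); this is one reading of those formulas, and α, the stand-in vectors, and the parameter names are illustrative assumptions rather than the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4          # embedding width
alpha = 0.2    # trade-off between reconstruction and label error (illustrative)

W_label = rng.normal(scale=0.1, size=m)
b_label = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_label(v):
    # Equation (9): pred(l=1 | v) = sigmoid(W_label v + b_label)
    return sigmoid(W_label @ v + b_label)

def node_error(x, y, e, r, label):
    """Per-node error of equation (11): alpha * E_rec + (1 - alpha) * E_lbl."""
    e_rec = 0.5 * np.sum((np.concatenate([x, y]) - r) ** 2)   # equation (12)
    e_lbl = 0.5 * (label - predict_label(e)) ** 2             # equation (13)
    return alpha * e_rec + (1.0 - alpha) * e_lbl

x, y = rng.normal(size=m), rng.normal(size=m)
e = np.tanh(rng.normal(size=m))          # stand-in for the node encoding
r = np.tanh(rng.normal(size=2 * m))      # stand-in for its reconstruction
print(node_error(x, y, e, r, label=1.0))
```

Summing this error over all nodes of the CCG-structured tree and adding the L2 term gives the regularised objective of equation (10).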
For models CCAE-C and CCAE-D we use the 25 most frequent CCG categories (as extracted from the British National Corpus) with an additional general weight matrix in order to catch all remaining types. 5.1 Sentiment Analysis We evaluate our model on the MPQA opinion corpus (Wiebe et al., 2005), which annotates expressions for sentiment.5 The corpus consists of 10,624 instances with approximately 70 percent describing a negative sentiment. We apply the same pre-processing as (Nakagawa et al., 2010) and (Socher et al., 2011b) by using an additional sentiment lexicon (Wilson et al., 2005) during the model training for this experiment. As a second corpus we make use of the sentence polarity (SP) dataset v1.0 (Pang and Lee, 2005).6 This dataset consists of 10662 sentences extracted from movie reviews which are manually labelled with positive or negative sentiment and equally distributed across sentiment. Experiment 1: Semi-Supervised Training In the first experiment, we use the semi-supervised training strategy described previously and initialize our models with the embeddings provided by Turian et al. (2010). The results of this evaluation are in Table 3. While we achieve the best performance on the MPQA corpus, the results on the SP corpus are less convincing. Perhaps surprisingly, the simplest model CCAE-A outperforms the other models on this dataset. When considering the two datasets, sparsity seems a likely explanation for this difference in results: In the MPQA experiment most instances are very short with an average length of 3 words, while the average sentence length in the SP corpus is 21 words. The MPQA task is further simplified through the use or an additional sentiment lexicon. Considering dictionary size, the SP corpus has a dictionary of 22k words, more than three times the size of the MPQA dictionary. 4http://www.metaoptimize.com/projects/ wordreprs/ 5http://mpqa.cs.pitt.edu/ 6http://www.cs.cornell.edu/people/ pabo/movie-review-data/ 899 Method MPQA SP Voting with two lexica 81.7 63.1 MV-RNN (Socher et al., 2012b) 79.0 RAE (rand) (Socher et al., 2011b) 85.7 76.8 TCRF (Nakagawa et al., 2010) 86.1 77.3 RAE (init) (Socher et al., 2011b) 86.4 77.7 NB (Wang and Manning, 2012) 86.7 79.4 CCAE-A 86.3 77.8 CCAE-B 87.1 77.1 CCAE-C 87.1 77.3 CCAE-D 87.2 76.7 Table 3: Accuracy of sentiment classification on the sentiment polarity (SP) and MPQA datasets. For NB we only display the best result among a larger group of models analysed in that paper. This issue of sparsity is exacerbated in the more complex CCAE models, where the training points are spread across different CCG types and rules. While the initialization of the word vectors with previously learned embeddings (as was previously shown by Socher et al. (2011b)) helps the models, all other model variables such as composition weights and biases are still initialised randomly and thus highly dependent on the amount of training data available. Experiment 2: Pretraining Due to our analysis of the results of the initial experiment, we ran a second series of experiments on the SP corpus. We follow (Scheible and Sch¨utze, 2013) for this second series of experiments, which are carried out on a random 90/10 training-testing split, with some data reserved for development. Instead of initialising the model with external word embeddings, we first train it on a large amount of data with the aim of overcoming the sparsity issues encountered in the previous experiment. 
Learning is thus divided into two steps: The first, unsupervised training phase, uses the British National Corpus together with the SP corpus. In this phase only the reconstruction signal is used to learn word embeddings and transformation matrices. Subsequently, in the second phase, only the SP corpus is used, this time with both the reconstruction and the label error. By learning word embeddings and composition matrices on more data, the model is likely to generalise better. Particularly for the more complex models, where the composition functions are conditioned on various CCG parameters, this should Training Model Regular Pretraining CCAE-A 77.8 79.5 CCAE-B 76.9 79.8 CCAE-C 77.1 81.0 CCAE-D 76.9 79.7 Table 4: Effect of pretraining on model performance on the SP dataset. Results are reported on a random subsection of the SP corpus; thus numbers for the regular training method differ slightly from those in Table 3. help to overcome issues of sparsity. If we consider the results of the pre-trained experiments in Table 4, this seems to be the case. In fact, the trend of the previous results has been reversed, with the more complex models now performing best, whereas in the previous experiments the simpler models performed better. Using the Turian embeddings instead of random initialisation did not improve results in this setup. 5.2 Compound Similarity In a second experiment we use the dataset from Mitchell and Lapata (2010) which contains similarity judgements for adjective-noun, noun-noun and verb-object pairs.7 All compound pairs have been ranked for semantic similarity by a number of human annotators. The task is thus to rank these pairs of word pairs by their semantic similarity. For instance, the two compounds vast amount and large quantity are given a high similarity score by the human judges, while northern region and early age are assigned no similarity at all. We train our models as fully unsupervised autoencoders on the British National Corpus for this task. We assume fixed parse trees for all of the compounds (Figure 6), and use these to compute compound level vectors for all word pairs. We subsequently use the cosine distance between each compound pair as our similarity measure. We use Spearman’s rank correlation coefficient (ρ) for evaluation; hence there is no need to rescale our scores (-1.0 – 1.0) to the original scale (1.0 – 7.0). Blacoe and Lapata (2012) have an extensive comparison of the performance of various vectorbased models on this data set to which we compare our model in Table 5. The CCAE models outper7http://homepages.inf.ed.ac.uk/mlap/ resources/index.html 900 Verb Object VB NN (S\NP)/NP N NP > S\NP Noun Noun NN NN N/N N > N Adjective Noun JJ NN N/N N > N Figure 6: Assumed CCG parse structure for the compound similarity evaluation. Method Adj-N N-N V-Obj Human 0.52 0.49 0.55 (Blacoe and Lapata, 2012) ⊙/+ 0.21 - 0.48 0.22 - 0.50 0.18 - 0.35 RAE 0.19 - 0.31 0.24 - 0.30 0.09 - 0.28 CCAE-B 0.38 0.44 0.34 CCAE-C 0.38 0.41 0.23 CCAE-D 0.41 0.44 0.29 Table 5: Correlation coefficients of model predictions for the compound similarity task. Numbers show Spearman’s rank correlation coefficient (ρ). Higher numbers indicate better correlation. form the RAE models provided by Blacoe and Lapata (2012), and score towards the upper end of the range of other models considered in that paper. 5.3 Qualitative Analysis To get better insight into our models we also perform a small qualitative analysis. 
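As a sketch of this evaluation protocol, the snippet below composes a vector for each compound, scores each pair of compounds by cosine similarity, and correlates the scores with the human judgements using Spearman's ρ. The composition function, embeddings, and data are toy placeholders for a trained CCAE model.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
m = 4
vocab = ["vast", "amount", "large", "quantity", "northern", "region", "early", "age"]
emb = {w: rng.normal(size=m) for w in vocab}  # stand-in word embeddings

def compose(w1, w2):
    # Placeholder for a trained composition model (e.g. one of the CCAEs);
    # here simply a tanh of the summed word vectors.
    return np.tanh(emb[w1] + emb[w2])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each item pairs two compounds with a human similarity judgement (1-7).
items = [((("vast", "amount"), ("large", "quantity")), 6.5),
         ((("northern", "region"), ("early", "age")), 1.2),
         ((("vast", "amount"), ("early", "age")), 2.0)]

model_scores = [cosine(compose(*p1), compose(*p2)) for (p1, p2), _ in items]
human_scores = [h for _, h in items]

# Spearman's rank correlation only looks at ranks, so no rescaling of either
# side (-1..1 cosine scores vs. 1..7 judgements) is needed.
rho, _ = spearmanr(model_scores, human_scores)
print(rho)
```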
Using one of the models trained on the MPQA corpus, we generate word-level representations of all phrases in this corpus and subsequently identify the most related expressions by using the cosine distance measure. We perform this experiment on all expressions of length 5, considering all expressions with a word length between 3 and 7 as potential matches. As can be seen in Table 6, this works with varying success. Linking expressions such as conveying the message of peace and safeguard(ing) peace and security suggests that the model does learn some form of semantics. On the other hand, the connection between expressed their satisfaction and support and expressed their admiration and surprise suggests that the pure word level content still has an impact on the model analysis. Likewise, the expressions is a story of success and is a staunch supporter have some lexical but little semantic overlap. Further reducing this link between the lexical and the semantic representation is an issue that should be addressed in future work in this area. 6 Discussion Overall, our models compare favourably with the state of the art. On the MPQA corpus model CCAE-D achieves the best published results we are aware of, whereas on the SP corpus we achieve competitive results. With an additional, unsupervised training step we achieved results beyond the current state of the art on this task, too. Semantics The qualitative analysis and the experiment on compounds demonstrate that the CCAE models are capable of learning semantics. An advantage of our approach—and of autoencoders generally—is their ability to learn in an unsupervised setting. The pre-training step for the sentiment task was essentially the same training step as used in the compound similarity task. While other models such as the MV-RNN (Socher et al., 2012b) achieve good results on a particular task, they do not allow unsupervised training. This prevents the possiblity of pretraining, which we showed to have a big impact on results, and further prevents the training of general models: The CCAE models can be used for multiple tasks without the need to re-train the main model. Complexity Previously in this paper we argued that our models combined the strengths of other approaches. By using a grammar formalism we increase the expressive power of the model while the complexity remains low. For the complexity analysis see Table 7. We strike a balance between the greedy approaches (e.g. Socher et al. (2011b)), where learning is quadratic in the length of each sentence and existing syntax-driven approaches such as the MV-RNN of Socher et al. (2012b), where the size of the model, that is the number of variables that needs to be learned, is quadratic in the size of the word-embeddings. Sparsity Parametrizing on CCG types and rules increases the size of the model compared to a greedy RAE (Socher et al., 2011b). 
The effect of this was highlighted by the sentiment analysis task, with the more complex models performing 901 Expression Most Similar convey the message of peace safeguard peace and security keep alight the flame of keep up the hope has a reason to repent has no right a significant and successful strike a much better position it is reassuring to believe it is a positive development expressed their satisfaction and support expressed their admiration and surprise is a story of success is a staunch supporter are lining up to condemn are going to voice their concerns more sanctions should be imposed charges being leveled could fray the bilateral goodwill could cause serious damage Table 6: Phrases from the MPQA corpus and their semantically closest match according to CCAE-D. Complexity Model Size Learning MV-RNN O(nw2) O(l) RAE O(nw) O(l2) CCAE-* O(nw) O(l) Table 7: Comparison of models. n is dictionary size, w embedding width, l is sentence length. We can assume l ≫n ≫w. Additional factors such as CCG rules and types are treated as small constants for the purposes of this analysis. worse in comparison with the simpler ones. We were able to overcome this issue by using additional training data. Beyond this, it would also be interesting to investigate the relationships between different types and to derive functions to incorporate this into the learning procedure. For instance model learning could be adjusted to enforce some mirroring effects between the weight matrices of forward and backward application, or to support similarities between those of forward application and composition. CCG-Vector Interface Exactly how the information contained in a CCG derivation is best applied to a vector space model of compositionality is another issue for future research. Our investigation of this matter by exploring different model setups has proved somewhat inconclusive. While CCAE-D incorporated the deepest conditioning on the CCG structure, it did not decisively outperform the simpler CCAE-B which just conditioned on the combinatory operators. Issues of sparsity, as shown in our experiments on pretraining, have a significant influence, which requires further study. 7 Conclusion In this paper we have brought a more formal notion of semantic compositionality to vector space models based on recursive autoencoders. This was achieved through the use of the CCG formalism to provide a conditioning structure for the matrix vector products that define the RAE. We have explored a number of models, each of which conditions the compositional operations on different aspects of the CCG derivation. Our experimental findings indicate a clear advantage for a deeper integration of syntax over models that use only the bracketing structure of the parse tree. The most effective way to condition the compositional operators on the syntax remains unclear. Once the issue of sparsity had been addressed, the complex models outperformed the simpler ones. Among the complex models, however, we could not establish significant or consistent differences to convincingly argue for a particular approach. While the connections between formal linguistics and vector space approaches to NLP may not be immediately obvious, we believe that there is a case for the continued investigation of ways to best combine these two schools of thought. This paper represents one step towards the reconciliation of traditional formal approaches to compositional semantics with modern machine learning. 
Acknowledgements We thank the anonymous reviewers for their feedback and Richard Socher for providing additional insight into his models. Karl Moritz would further like to thank Sebastian Riedel for hosting him at UCL while this paper was written. This work has been supported by the EPSRC. 902 References Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of EMNLP, pages 1183–1193. Valerio Basile, Johan Bos, Kilian Evang, and Noortje Venhuizen. 2012. Developing a large semantically annotated corpus. In Proceedings of LREC, pages 3196–3200, Istanbul, Turkey. Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2007. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems 19, pages 153–160. William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In Proceedings of EMNLP-CoNLL, pages 546–556. Stephen Clark and James R. Curran. 2007. Widecoverage efficient statistical parsing with ccg and log-linear models. CL, 33(4):493–552, December. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML. James Curran, Stephen Clark, and Johan Bos. 2007. Linguistically motivated large-scale nlp with c&c and boxer. In Proceedings of ACL Demo and Poster Sessions, pages 33–36. J. R. Firth. 1957. A synopsis of linguistic theory 193055. 1952-59:1–32. Gottfried Frege. 1892. ¨Uber Sinn und Bedeutung. In Mark Textor, editor, Funktion - Begriff - Bedeutung, volume 4 of Sammlung Philosophie. Vandenhoeck & Ruprecht, G¨ottingen. Christoph Goller and Andreas K¨uchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In Proceedings of the ICNN-96, pages 347–352. IEEE. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of EMNLP, pages 1394–1404. Edward Grefenstette, Georgiana Dinu, Yao-Zhong Zhang, Mehrnoosh Sadrzadeh, and Marco Baroni. 2013. Multi-step regression learning for compositional distributional semantics. G. E. Hinton and R. R. Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507. Geoffrey E. Hinton, Simon Osindero, Max Welling, and Yee Whye Teh. 2006. Unsupervised discovery of nonlinear structure using contrastive backpropagation. Cognitive Science, 30(4):725–731. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank. CL, 33(3):355–396, September. Ray Jackendoff. 1972. Semantic Interpretation in Generative Grammar. MIT Press, Cambridge, MA. Marcus Kracht. 2008. Compositionality in Montague Grammar. In Edouard Machery und Markus Werning Wolfram Hinzen, editor, Handbook of Compositionality, pages 47 – 63. Oxford University Press. Yann LeCun, Leon Bottou, Genevieve Orr, and KlausRobert Muller. 1998. Efficient backprop. In G. Orr and Muller K., editors, Neural Networks: Tricks of the trade. Springer. Dekang Lin. 1999. Automatic identification of noncompositional phrases. In Proceedings of ACL, pages 317–324. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In In Proceedings of ACL, pages 236–244. Jeff Mitchell and Mirella Lapata. 2010. 
Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. R. Montague. 1970. Universal grammar. Theoria, 36(3):373–398. Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using crfs with hidden variables. In NAACLHLT, pages 786–794. Bo Pang and Lillian Lee. 2005. Seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL, pages 115–124. Francis Jeffry Pelletier. 1994. The principle of semantic compositionality. Topoi, 13:11–24. Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of english words. In Proceedings of ACL, ACL ’93, pages 183–190. Jordan B. Pollack. 1990. Recursive distributed representations. Artificial Intelligence, 46:77–105. Christian Scheible and Hinrich Sch¨utze. 2013. Cutting recursive autoencoder trees. In Proceedings of the International Conference on Learning Representations. Hinrich Sch¨utze. 1998. Automatic word sense discrimination. CL, 24(1):97–123, March. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011a. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection. In Advances in Neural Information Processing Systems 24. 903 Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011b. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP, pages 151–161. Richard Socher, Brody Huval, Bharath Bhat, Christopher D. Manning, and Andrew Y. Ng. 2012a. Convolutional-Recursive Deep Learning for 3D Object Classification. In Advances in Neural Information Processing Systems 25. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012b. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP-CoNLL, pages 1201– 1211. Mark Steedman and Jason Baldridge, 2011. Combinatory Categorial Grammar, pages 181–224. WileyBlackwell. Anna Szabolcsi. 1989. Bound Variables in Syntax: Are There Any? In Renate Bartsch, Johan van Benthem, and Peter van Emde Boas, editors, Semantics and Contextual Expression, pages 295–318. Foris, Dordrecht. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of ACL, pages 384–394. Sida Wang and Christopher D. Manning. 2012. Baselines and bigrams: simple, good sentiment and topic classification. In Proceedings of ACL, pages 90–94. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3):165–210. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of EMNLPHLT, HLT ’05, pages 347–354. 904
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 905–913, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Margin-based Decomposed Amortized Inference Gourab Kundu∗and Vivek Srikumar∗and Dan Roth University of Illinois, Urbana-Champaign Urbana, IL. 61801 {kundu2, vsrikum2, danr}@illinois.edu Abstract Given that structured output prediction is typically performed over entire datasets, one natural question is whether it is possible to re-use computation from earlier inference instances to speed up inference for future instances. Amortized inference has been proposed as a way to accomplish this. In this paper, first, we introduce a new amortized inference algorithm called the Margin-based Amortized Inference, which uses the notion of structured margin to identify inference problems for which previous solutions are provably optimal. Second, we introduce decomposed amortized inference, which is designed to address very large inference problems, where earlier amortization methods become less effective. This approach works by decomposing the output structure and applying amortization piece-wise, thus increasing the chance that we can re-use previous solutions for parts of the output structure. These parts are then combined to a global coherent solution using Lagrangian relaxation. In our experiments, using the NLP tasks of semantic role labeling and entityrelation extraction, we demonstrate that with the margin-based algorithm, we need to call the inference engine only for a third of the test examples. Further, we show that the decomposed variant of margin-based amortized inference achieves a greater reduction in the number of inference calls. 1 Introduction A wide variety of NLP problems can be naturally cast as structured prediction problems. For * These authors contributed equally to this work. some structures like sequences or parse trees, specialized and tractable dynamic programming algorithms have proven to be very effective. However, as the structures under consideration become increasingly complex, the computational problem of predicting structures can become very expensive, and in the worst case, intractable. In this paper, we focus on an inference technique called amortized inference (Srikumar et al., 2012), where previous solutions to inference problems are used to speed up new instances. The main observation that leads to amortized inference is that, very often, for different examples of the same size, the structures that maximize the score are identical. If we can efficiently identify that two inference problems have the same solution, then we can re-use previously computed structures for newer examples, thus giving us a speedup. This paper has two contributions. First, we describe a novel algorithm for amortized inference called margin-based amortization. This algorithm is on an examination of the structured margin of a prediction. For a new inference problem, if this margin is larger than the sum of the decrease in the score of the previous prediction and any increase in the score of the second best one, then the previous solution will be the highest scoring one for the new problem. We formalize this intuition to derive an algorithm that finds provably optimal solutions and show that this approach is a generalization of previously identified schemes (based on Theorem 1 of (Srikumar et al., 2012)). 
Second, we argue that the idea of amortization is best exploited at the level of parts of the structures rather than the entire structure because we expect a much higher redundancy in the parts. We introduce the notion of decomposed amortized inference, whereby we can attain a significant improvement in speedup by considering repeated sub-structures across the dataset and applying any amortized inference algorithm for the parts. 905 We evaluate the two schemes and their combination on two NLP tasks where the output is encoded as a structure: PropBank semantic role labeling (Punyakanok et al., 2008) and the problem of recognizing entities and relations in text (Roth and Yih, 2007; Kate and Mooney, 2010). In these problems, the inference problem has been framed as an integer linear program (ILP). We compare our methods with previous amortized inference methods and show that margin-based amortization combined with decomposition significantly outperforms existing methods. 2 Problem Definition and Notation Structured output prediction encompasses a wide variety of NLP problems like part-of-speech tagging, parsing and machine translation. The language of 0-1 integer linear programs (ILP) provides a convenient analytical tool for representing structured prediction problems. The general setting consists of binary inference variables each of which is associated with a score. The goal of inference is to find the highest scoring global assignment of the variables from a feasible set of assignments, which is defined by linear inequalities. While efficient inference algorithms exist for special families of structures (like linear chains and trees), in the general case, inference can be computationally intractable. One approach to deal with the computational complexity of inference is to use an off-the-shelf ILP solver for solving the inference problem. This approach has seen increasing use in the NLP community over the last several years (for example, (Roth and Yih, 2004; Clarke and Lapata, 2006; Riedel and Clarke, 2006) and many others). Other approaches for solving inference include the use of cutting plane inference (Riedel, 2009), dual decomposition (Koo et al., 2010; Rush et al., 2010) and the related method of Lagrangian relaxation (Rush and Collins, 2011; Chang and Collins, 2011). (Srikumar et al., 2012) introduced the notion of an amortized inference algorithm, defined as an inference algorithm that can use previous predictions to speed up inference time, thereby giving an amortized gain in inference time over the lifetime of the program. The motivation for amortized inference comes from the observation that though the number of possible structures could be large, in practice, only a small number of these are ever seen in real data. Furthermore, among the observed structures, a small subset typically occurs much more frequently than the others. Figure 1 illustrates this observation in the context of part-of-speech tagging. If we can efficiently characterize and identify inference instances that have the same solution, we can take advantage of previously performed computation without paying the high computational cost of inference. Figure 1: Comparison of number of instances and the number of unique observed part-of-speech structures in the Gigaword corpus. Note that the number of observed structures (blue solid line) is much lower than the number of sentences (red dotted line) for all sentence lengths, with the difference being very pronounced for shorter sentences. 
Embedded in the graph are three histograms that show the distribution of observed structures for sentences of length 15, 20 and 30. In all cases, we see that a small number of tag sequences are much more frequent than the others. We denote inference problems by the boldfaced letters p and q. For a problem p, the goal of inference is to jointly assign values to the parts of the structure, which are represented by a collection of inference variables y ∈{0, 1}n. For all vectors, subscripts represent their ith component. Each yi is associated with a real valued cp,i ∈ℜ which is the score for the variable yi being assigned the value 1. We denote the vector comprising of all the cp,i as cp. The search space for assignments is restricted via constraints, which can be written as a collection of linear inequalities, MT y ≤b. For a problem p, we denote this feasible set of structures by Kp. The inference problem is that of finding the feasible assignment to the structure which maximizes the dot product cT y. Thus, the prediction problem can be written as arg max y∈Kp cT y. (1) 906 We denote the solution of this maximization problem as yp. Let the set P = {p1, p2, · · · } denote previously solved inference problems, along with their respective solutions {y1 p, y2 p, · · · }. An equivalence class of integer linear programs, denoted by [P], consists of ILPs which have the same number of inference variables and the same feasible set. Let K[P] denote the feasible set of an equivalence class [P]. For a program p, the notation p ∼[P] indicates that it belongs to the equivalence class [P]. (Srikumar et al., 2012) introduced a set of amortized inference schemes, each of which provides a condition for a new ILP to have the same solution as a previously seen problem. We will briefly review one exact inference scheme introduced in that work. Suppose q belongs to the same equivalence class of ILPs as p. Then the solution to q will be the same as that of p if the following condition holds for all inference variables: (2yp,i −1)(cq,i −cp,i) ≥0. (2) This condition, referred to as Theorem 1 in that work, is the baseline for our experiments. In general, for any amortization scheme A, we can define two primitive operators TESTCONDITIONA and SOLUTIONA. Given a collection of previously solved problems P and a new inference problem q, TESTCONDITIONA(P, q) checks if the solution of the new problem is the same as that of some previously solved one and if so, SOLUTIONA(P, q) returns the solution. 3 Margin-based Amortization In this section, we will introduce a new method for amortizing inference costs over time. The key observation that leads to this theorem stems from the structured margin δ for an inference problem p ∼[P], which is defined as follows: δ = min y∈K[P ],y̸=yp cT p(yp −y). (3) That is, for all feasible y, we have cT pyp ≥cT py + δ. The margin δ is the upper limit on the change in objective that is allowed for the constraint set K[P] for which the solution will not change. For a new inference problem q ∼[P], we define ∆as the maximum change in objective value that can be effected by an assignment that is not the A B = yp cp cq δ ∆ decrease in value of yp increasing objective cpT yp Two assignments Figure 2: An illustration of the margin-based amortization scheme showing the very simple case with only two competing assignments A and B. Suppose B is the solution yp for the inference problem p with coefficients cp, denoted by the red hyperplane, and A is the second-best assignment. 
For a new coefficient vector cq, if the margin δ is greater than the sum of the decrease in the objective value of yp and the maximum increase in the objective of another solution (∆), then the solution to the new inference problem will still be yp. The margin-based amortization theorem captures this intuition. solution. That is, ∆= max y∈K[P ],y̸=yp (cq −cp)T y (4) Before stating the theorem, we will provide an intuitive explanation for it. Moving from cp to cq, consider the sum of the decrease in the value of the objective for the solution yp and ∆, the maximum change in objective value for an assignment that is not the solution. If this sum is less than the margin δ, then no other solution will have an objective value higher than yp. Figure 2 illustrates this using a simple example where there are only two competing solutions. This intuition is captured by our main theorem which provides a condition for problems p and q to have the same solution yp. Theorem 1 (Margin-based Amortization). Let p denote an inference problem posed as an integer linear program belonging to an equivalence class [P] with optimal solution yp. Let p have a structured margin δ, i.e., for any y, we have cT pyp ≥cT py + δ. Let q ∼[P] be another inference instance in the same equivalence class and let ∆be defined as in Equation 4. Then, yp is the solution of the problem q if the following holds: −(cq −cp)T yp + ∆≤δ (5) 907 Proof. For some feasible y, we have cT qyp −cT qy ≥ cT qyp −cT py −∆ ≥ cT qyp −cT pyp + δ −∆ ≥ 0 The first inequality comes from the definition of ∆ in (4) and the second one follows from the definition of δ. The condition of the theorem in (5) gives us the final step. For any feasible y, the objective score assigned to yp is greater than the score assigned to y according to problem q. That is, yp is the solution to the new problem. The margin-based amortization theorem provides a general, new amortized inference algorithm. Given a new inference problem, we check whether the inequality (5) holds for any previously seen problems in the same equivalence class. If so, we return the cached solution. If no such problem exists, then we make a call to an ILP solver. Even though the theorem provides a condition for two integer linear programs to have the same solution, checking the validity of the condition requires the computation of ∆, which in itself is another integer linear program. To get around this, we observe that if any constraints in Equation 4 are relaxed, the value of the resulting maximum can only increase. Even with the increased ∆, if the condition of the theorem holds, then the rest of the proof follows and hence the new problem will have the same solution. In other words, we can solve relaxed, tractable variants of the maximization in Equation 4 and still retain the guarantees provided by the theorem. The tradeoff is that, by doing so, the condition of the theorem will apply to fewer examples than theoretically possible. In our experiments, we will define the relaxation for each problem individually and even with the relaxations, the inference algorithm based on the margin-based amortization theorem outperforms all previous amortized inference algorithms. The condition in inequality (5) is, in fact, a strict generalization of the condition for Theorem 1 in (Srikumar et al., 2012), stated in (2). If the latter condition holds, then we can show that ∆≤0 and (cq −cp)T yp ≥0. Since δ is, by definition, nonnegative, the margin-based condition is satisfied. 
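To make the two conditions concrete, the following is a minimal sketch (not the authors' implementation) that checks both the margin-based condition (5) and the baseline condition (2) on a toy equivalence class. The function names, the numeric example, and the explicit enumeration of the feasible set used to compute ∆ are all illustrative assumptions; in practice ∆ is upper-bounded by a relaxed, tractable maximization as described above.

```python
import numpy as np

def margin_based_match(c_p, y_p, delta, c_q, feasible):
    """Condition (5): reuse y_p for new coefficients c_q if
    -(c_q - c_p)^T y_p + Delta <= delta.

    `feasible` explicitly enumerates the 0-1 assignments of the equivalence
    class so that Delta can be computed exactly; this only works for a toy
    example, whereas the paper bounds Delta via a relaxed maximization."""
    diff = c_q - c_p
    delta_max = max(diff @ y for y in feasible if not np.array_equal(y, y_p))
    return -(diff @ y_p) + delta_max <= delta

def baseline_match(c_p, y_p, c_q):
    """Baseline condition (2) of Srikumar et al. (2012):
    (2*y_p - 1) * (c_q - c_p) >= 0, elementwise."""
    return bool(np.all((2 * y_p - 1) * (c_q - c_p) >= 0))

# Toy equivalence class with two competing assignments (cf. Figure 2).
feasible = [np.array([1, 0]), np.array([0, 1])]
c_p = np.array([1.0, 3.0])                 # cached problem; its solution is B
y_p = np.array([0, 1])
delta = c_p @ y_p - c_p @ feasible[0]      # structured margin of the cached problem (2.0)
c_q = np.array([1.5, 2.8])                 # new problem with perturbed scores
print(margin_based_match(c_p, y_p, delta, c_q, feasible))  # True: reuse y_p
print(baseline_match(c_p, y_p, c_q))                       # False: baseline misses it
```

On this toy instance the baseline condition rejects reuse while the margin-based condition accepts it, mirroring the strict-generalization argument above.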
4 Decomposed Amortized Inference One limitation in previously considered approaches for amortized inference stems from the expectation that the same full assignment maximizes the objective score for different inference problems, or equivalently, that the entire structure is repeated multiple times. Even with this assumption, we observe a speedup in prediction. However, intuitively, even if entire structures are not repeated, we expect parts of the assignment to be the same across different instances. In this section, we address the following question: Can we take advantage of the redundancy in components of structures to extend amortization techniques to cases where the full structured output is not repeated? By doing so, we can store partial computation for future inference problems. For example, consider the task of part of speech tagging. While the likelihood of two long sentences having the same part of speech tag sequence is not high, it is much more likely that shorter sections of the sentences will share the same tag sequence. We see from Figure 1 that the number of possible structures for shorter sentences is much smaller than the number of sentences. This implies that many shorter sentences share the same structure, thus improving the performance of an amortized inference scheme for such inputs. The goal of decomposed amortized inference is to extend this improvement to larger problems by increasing the size of equivalence classes. To decompose an inference problem, we use the approach of Lagrangian Relaxation (Lemar´echal, 2001) that has been used successfully for various NLP tasks (Chang and Collins, 2011; Rush and Collins, 2011). We will briefly review the underlying idea1. The goal is to solve an integer linear program q, which is defined as q : max MT y≤b cT qy We partition the constraints into two sets, say C1 denoting M1T y ≤b1 and C2, denoting constraints M2T y ≤b2. The assumption is that in the absence the constraints C2, the inference problem becomes computationally easier to solve. In other words, we can assume the existence of a subroutine that can efficiently compute the solution of the relaxed problem q′: q′ : max M1T y≤b1 cT qy 1For simplicity, we only write inequality constraints in the paper. However, all the results here are easily extensible to equality constraints by removing the non-negativity constraints from the corresponding dual variables. 908 We define Lagrange multipliers Λ ≥0, with one λi for each constraint in C2. For problem q, we can define the Lagrangian as L(y, Λ) = cT qy −ΛT M2T y −b2  Here, the domain of y is specified by the constraint set C1. The dual objective is L(Λ) = max M1T y≤b1 cT qy −ΛT M2T y −b2  = max M1T y≤b1 cq −ΛT M2 T y + ΛT b2. Note that the maximization in the definition of the dual objective has the same functional form as q′ and any approach to solve q′ can be used here to find the dual objective L(Λ). The dual of the problem q, given by minΛ≥0 L(Λ), can be solved using subgradient descent over the dual variables. Relaxing the constraints C2 to define the problem q′ has several effects. First, it can make the resulting inference problem q′ easier to solve. More importantly, removing constraints can also lead to the merging of multiple equivalence classes, leading to fewer, more populous equivalence classes. Finally, removing constraints can decompose the inference problem q′ into smaller independent sub-problems {q1, q2, · · · } such that no constraint that is in C1 has active variables from two different sets in the partition. 
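As a small illustration of how the relaxed constraints enter the computation, the sketch below evaluates the dual objective L(Λ) by handing adjusted coefficients to a black-box maximizer over C1. The helper `solve_relaxed` and the argument names are assumptions made for the sketch, not part of the paper's implementation.

```python
import numpy as np

def dual_objective(c_q, M2, b2, lam, solve_relaxed):
    """Evaluate L(lam) for relaxed constraints written as M2^T y <= b2.

    `solve_relaxed(c)` is an assumed black box returning an argmax of c^T y
    over the easy constraint set C1. The constraints C2 appear only through
    the adjusted coefficients c_q - M2 @ lam and the constant term b2 @ lam.
    A subgradient of L at the returned y is b2 - M2.T @ y, which is what the
    projected subgradient update in Algorithm 1 (below) descends on."""
    adjusted = c_q - M2 @ lam       # equals (c_q - lam^T M2^T)^T
    y = solve_relaxed(adjusted)
    return adjusted @ y + b2 @ lam, y
```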
For the sub-problem qi comprising of variables yi, let the corresponding objective coefficients be cqi and the corresponding sub-matrix of M2 be Mi 2. Now, we can define the dual-augmented subproblem as max Mi 1 T y≤bi 1  cqi −ΛT Mi 2 T yi (6) Solving all such sub-problems will give us a complete assignment for all the output variables. We can now define the decomposed amortized inference algorithm (Algorithm 1) that performs sub-gradient descent over the dual variables. The input to the algorithm is a collection of previously solved problems with their solutions, a new inference problem q and an amortized inference scheme A (such as the margin-based amortization scheme). In addition, for the task at hand, we first need to identify the set of constraints C2 that can be introduced via the Lagrangian. First, we check if the solution can be obtained without decomposition (lines 1–2). Otherwise, Algorithm 1 Decomposed Amortized Inference Input: A collection of previously solved inference problems P, a new problem q, an amortized inference algorithm A. Output: The solution to problem q 1: if TESTCONDITION(A, q, P) then 2: return SOLUTION(A, q, P) 3: else 4: Initialize λi ←0 for each constraint in C2. 5: for t = 1 · · · T do 6: Partition the problem q into subproblems q1, q2, · · · such that no constraint in C1 has active variables from two partitions. 7: for partition qi do 8: yi ←Solve the maximization problem for qi (Eq. 6) using the amortized scheme A. 9: end for 10: Let y ←  y1; y2; · · ·  11: if M2y ≤b2 and (b2 −M2y)iλi = 0 then 12: return y 13: else 14: Λ ←  Λ −µt b2 −M2T y  + 15: end if 16: end for 17: return solution of q using a standard inference algorithm 18: end if we initialize the dual variables Λ and try to obtain the solution iteratively. At the tth iteration, we partition the problem q into sub-problems {q1, q2, · · · } as described earlier (line 6). Each partition defines a smaller inference problem with its own objective coefficients and constraints. We can apply the amortization scheme A to each subproblem to obtain a complete solution for the relaxed problem (lines 7–10). If this solution satisfies the constraints C2 and complementary slackness conditions, then the solution is provably the maximum of the problem q. Otherwise, we take a subgradient step to update the value of Λ using a step-size µt, subject to the constraint that all dual variables must be non-negative (line 14). If we do not converge to a solution in T iterations, we call the underlying solver on the full problem. In line 8 of the algorithm, we make multiple calls to the underlying amortized inference procedure to solve each sub-problem. If the sub909 problem cannot be solved using the procedure, then we can either solve the sub-problem using a different approach (effectively giving us the standard Lagrangian relaxation algorithm for inference), or we can treat the full instance as a cache miss and make a call to an ILP solver. In our experiments, we choose the latter strategy. 5 Experiments and Results Our experiments show two results: 1. The marginbased scheme outperforms the amortized inference approaches from (Srikumar et al., 2012). 2. Decomposed amortized inference gives further gains in terms of re-using previous solutions. 5.1 Tasks We report the performance of inference on two NLP tasks: semantic role labeling and the task of extracting entities and relations from text. In both cases, we used an existing formulation for structured inference and only modified the inference calls. 
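In such a pipeline, the replaced inference call is essentially Algorithm 1. The following sketch spells out its control flow, with the amortization scheme, the partitioning step, and the fallback ILP solver left as assumed interfaces; none of this is the authors' code, and the attribute names on `q` are hypothetical.

```python
import numpy as np

def decomposed_amortized_solve(q, cache, amortized, partition, solve_ilp,
                               T=100, step=0.1):
    """Sketch of Algorithm 1 under assumed interfaces:
      amortized.test(cache, p) / amortized.solution(cache, p): the underlying
        amortization scheme A (e.g. the margin-based condition);
      partition(q, lam): the dual-augmented sub-problems of Eq. (6), returned
        in original variable order so their solutions can be concatenated;
      solve_ilp(p): a call to a full ILP solver;
      q.M2, q.b2: the relaxed constraints, written as M2^T y <= b2."""
    if amortized.test(cache, q):                        # lines 1-2
        return amortized.solution(cache, q)

    lam = np.zeros(len(q.b2))                           # line 4
    for t in range(T):                                  # lines 5-16
        parts = partition(q, lam)                       # line 6
        ys = []
        for sub in parts:                               # lines 7-9
            if amortized.test(cache, sub):
                ys.append(amortized.solution(cache, sub))
            else:
                # Cache miss on a part: the paper treats the full instance as
                # a miss and calls the solver directly (the "latter strategy").
                return solve_ilp(q)
        y = np.concatenate(ys)                          # line 10
        slack = q.b2 - q.M2.T @ y
        if np.all(slack >= 0) and np.allclose(lam * slack, 0.0):   # line 11
            return y                    # feasible and complementary slackness holds
        lam = np.maximum(lam - step * slack, 0.0)       # line 14: projected subgradient
    return solve_ilp(q)                                 # line 17
```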
We will briefly describe the problems and the implementation and point the reader to the literature for further details. Semantic Role Labeling (SRL) Our first task is that of identifying arguments of verbs in a sentence and annotating them with semantic roles (Gildea and Jurafsky, 2002; Palmer et al., 2010) . For example, in the sentence Mrs. Haag plays Eltiani., the verb plays takes two arguments: Mrs. Haag, the actor, labeled as A0 and Eltiani, the role, labeled as A1. It has been shown in prior work (Punyakanok et al., 2008; Toutanova et al., 2008) that making a globally coherent prediction boosts performance of SRL. In this work, we used the SRL system of (Punyakanok et al., 2008), where one inference problem is generated for each verb and each inference variables encodes the decision that a given constituent in the sentence takes a specific role. The scores for the inference variables are obtained from a classifier trained on the PropBank corpus. Constraints encode structural and linguistic knowledge about the problem. For details about the formulations of the inference problem, please see (Punyakanok et al., 2008). Recall from Section 3 that we need to define a relaxed version of the inference problem to efficiently compute ∆for the margin-based approach. For a problem instance with coefficients cq and cached coefficients cp, we take the sum of the highest n values of cq −cp as our ∆, where n is the number of argument candidates to be labeled. To identify constraints that can be relaxed for the decomposed algorithm, we observe that most constraints are not predicate specific and apply for all predicates. The only constraint that is predicate specific requires that each predicate can only accept roles from a list of roles that is defined for that predicate. By relaxing this constraint in the decomposed algorithm, we effectively merge all the equivalence classes for all predicates with a specific number of argument candidates. Entity-Relation extraction Our second task is that of identifying the types of entities in a sentence and the relations among them, which has been studied by (Roth and Yih, 2007; Kate and Mooney, 2010) and others. For the sentence Oswald killed Kennedy, the words Oswald and Kennedy will be labeled by the type PERSON, and the KILL relation exists between them. We followed the experimental setup as described in (Roth and Yih, 2007). We defined one inference problem for each sentence. For every entity (which is identified by a constituent in the sentence), an inference variable is introduced for each entity type. For each pair of constituents, an inference variable is introduced for each relation type. Clearly, the assignment of types to entities and relations are not independent. For example, an entity of type ORGANIZATION cannot participate in a relation of type BORN-IN because this relation label can connect entities of type PERSON and LOCATION only. Incorporating these natural constraints during inference were shown to improve performance significantly in (Roth and Yih, 2007). We trained independent classifiers for entities and relations and framed the inference problem as in (Roth and Yih, 2007). For further details, we refer the reader to that paper. To compute the value of ∆for the margin-based algorithm, for a new instance with coefficients cq and cached coefficients cp, we define ∆to be the sum of all non-negative values of cq −cp. For the decomposed inference algorithm, if the number of entities is less than 5, no decomposition is performed. 
Otherwise, the entities are partitioned into two sets: set A includes the first four entities and set B includes the rest of the entities. We relaxed the relation constraints that go across these two sets of entities to obtain two independent inference problems. 910 5.2 Experimental Setup We follow the experimental setup of (Srikumar et al., 2012) and simulate a long-running NLP process by caching problems and solutions from the Gigaword corpus. We used a database engine to cache ILP and their solutions along with identifiers for the equivalence class and the value of δ. For the margin-based algorithm and the Theorem 1 from (Srikumar et al., 2012), for a new inference problem p ∼[P], we retrieve all inference problems from the database that belong to the same equivalence class [P] as the test problem p and find the cached assignment y that has the highest score according to the coefficients of p. We only consider cached ILPs whose solution is y for checking the conditions of the theorem. This optimization ensures that we only process a small number of cached coefficient vectors. In a second efficiency optimization, we pruned the database to remove redundant inference problems. A problem is redundant if solution to that problem can be inferred from the other problems stored in the database that have the same solution and belong to the same equivalence class. However, this pruning can be computationally expensive if the number of problems with the same solution and the same equivalence class is very large. In that case, we first sampled a 5000 problems randomly and selected the non-redundant problems from this set to keep in the database. 5.3 Results We compare our approach to a state-of-the-art ILP solver2 and also to Theorem 1 from (Srikumar et al., 2012). We choose this baseline because it is shown to give the highest improvement in wall-clock time and also in terms of the number of cache hits. However, we note that the results presented in our work outperform all the previous amortization algorithms, including the approximate inference methods. We report two performance metrics – the percentage decrease in the number of ILP calls, and the percentage decrease in the wall-clock inference time. These are comparable to the speedup and clock speedup defined in (Srikumar et al., 2012). For measuring time, since other aspects of prediction (like feature extraction) are the same across all settings, we only measure the time taken for inference and ignore other aspects. For both 2We used the Gurobi optimizer for our experiments. tasks, we report the runtime performance on section 23 of the Penn Treebank. Note that our amortization schemes guarantee optimal solution. Consequently, using amortization, task accuracy remains the same as using the original solver. Table 1 shows the percentage reduction in the number of calls to the ILP solver. Note that for both the SRL and entity-relation problems, the margin-based approach, even without using decomposition (the columns labeled Original), outperforms the previous work. Applying the decomposed inference algorithm improves both the baseline and the margin-based approach. Overall, however, the fewest number of calls to the solver is made when combining the decomposed inference algorithm with the margin-based scheme. For the semantic role labeling task, we need to call the solver only for one in six examples while for the entity-relations task, only one in four examples require a solver call. 
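The cache lookup underlying these counts, described at the start of this section, can be sketched as follows. The in-memory dictionary, the entry layout, and the `check` predicate are illustrative stand-ins for the database engine and the amortization condition actually used.

```python
import numpy as np
from collections import defaultdict

# cache: equivalence-class id -> list of (c_p, y_p, delta) entries,
# populated by appending an entry whenever the ILP solver is called.
cache = defaultdict(list)

def lookup(class_id, c_q, check):
    """Retrieval heuristic of Section 5.2: among cached problems of the same
    equivalence class, take the cached assignment scoring highest under c_q,
    and test the amortization condition `check(c_p, y_p, delta, c_q)` only
    against entries having that solution. Returns a reusable solution or
    None on a cache miss."""
    entries = cache[class_id]
    if not entries:
        return None
    y_best = max(entries, key=lambda e: c_q @ e[1])[1]
    for c_p, y_p, delta in entries:
        if np.array_equal(y_p, y_best) and check(c_p, y_p, delta, c_q):
            return y_p
    return None
```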
Table 2 shows the corresponding reduction in the wall-clock time for the various settings. We see that once again, the margin based approach outperforms the baseline. While the decomposed inference algorithm improves running time for SRL, it leads to a slight increase for the entityrelation problem. Since this increase occurs in spite of a reduction in the number of solver calls, we believe that this aspect can be further improved with an efficient implementation of the decomposed inference algorithm. 6 Discussion Lagrangian Relaxation in the literature In the literature, in applications of the Lagrangian relaxation technique (such as (Rush and Collins, 2011; Chang and Collins, 2011; Reichart and Barzilay, 2012) and others), the relaxed problems are solved using specialized algorithms. However, in both the relaxations considered in this paper, even the relaxed problems cannot be solved without an ILP solver, and yet we can see improvements from decomposition in Table 1. To study the impact of amortization on running time, we modified our decomposition based inference algorithm to solve each sub-problem using the ILP solver instead of amortization. In these experiments, we ran Lagrangian relaxation for until convergence or at most T iterations. After T iterations, we call the ILP solver and solve the original problem. We set T to 100 in one set of exper911 % ILP Solver calls required Method Semantic Role Labeling Entity-Relation Extraction Original + Decomp. Original + Decomp. ILP Solver 100 – 100 – (Srikumar et al., 2012) 41 24.4 59.5 57.0 Margin-based 32.7 16.6 28.2 25.4 Table 1: Reduction in number of inference calls % time required compared to ILP Solver Method Semantic Role Labeling Entity-Relation Extraction Original + Decomp. Original + Decomp. ILP Solver 100 – 100 – (Srikumar et al., 2012) 54.8 40.0 81 86 Margin-based 45.9 38.1 58.1 61.3 Table 2: Reduction in inference time iments (call it Lag1) and T to 1 (call it Lag2). In SRL, compared to solving the original problem with ILP Solver, both Lag1 and Lag2 are roughly 2 times slower. For entity relation task, compared to ILP Solver, Lag1 is 186 times slower and Lag2 is 1.91 times slower. Since we used the same implementation of the decomposition in all experiments, this shows that the decomposed inference algorithm crucially benefits from the underlying amortization scheme. Decomposed amortized inference The decomposed amortized inference algorithm helps improve amortized inference in two ways. First, since the number of structures is a function of its size, considering smaller sub-structures will allow us to cache inference problems that cover a larger subset of the space of possible sub-structures. We observed this effect in the problem of extracting entities and relations in text. Second, removing a constraint need not always partition the structure into a set of smaller structures. Instead, by removing the constraint, examples that might have otherwise been in different equivalence classes become part of a combined, larger equivalence class. Increasing the size of the equivalence classes increases the probability of a cache-hit. In our experiments, we observed this effect in the SRL task. 7 Conclusion Amortized inference takes advantage of the regularities in structured output to re-use previous computation and improve running time over the lifetime of a structured output predictor. In this paper, we have described two approaches for amortizing inference costs over datasets. 
The first, called the margin-based amortized inference, is a new, provably exact inference algorithm that uses the notion of a structured margin to identify previously solved problems whose solutions can be reused. The second, called decomposed amortized inference, is a meta-algorithm over any amortized inference that takes advantage of previously computed sub-structures to provide further reductions in the number of inference calls. We show via experiments that these methods individually give a reduction in the number of calls made to an inference engine for semantic role labeling and entityrelation extraction. Furthermore, these approaches complement each other and, together give an additional significant improvement. Acknowledgments The authors thank the members of the Cognitive Computation Group at the University of Illinois for insightful discussions and the anonymous reviewers for valuable feedback. This research is sponsored by the Army Research Laboratory (ARL) under agreement W911NF-09-2-0053. The authors also gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. This material also is based on research sponsored by DARPA under agreement number FA8750-13-2-0008. This work has also been supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center contract number D11PC20155. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of ARL, DARPA, AFRL, IARPA, DoI/NBC or the US government. 912 References Y-W. Chang and M. Collins. 2011. Exact decoding of phrase-based translation models through Lagrangian relaxation. EMNLP. J. Clarke and M. Lapata. 2006. Constraint-based sentence compression: An integer programming approach. In ACL. D. Gildea and D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics. R. Kate and R. Mooney. 2010. Joint entity and relation extraction using card-pyramid parsing. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 203–212. Association for Computational Linguistics. T. Koo, A. M. Rush, M. Collins, T. Jaakkola, and D. Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In EMNLP. C. Lemar´echal. 2001. Lagrangian Relaxation. In Computational Combinatorial Optimization, pages 112–156. M. Palmer, D. Gildea, and N. Xue. 2010. Semantic Role Labeling, volume 3. Morgan & Claypool Publishers. V. Punyakanok, D. Roth, and W. Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics. R. Reichart and R. Barzilay. 2012. Multi event extraction guided by global constraints. In NAACL, pages 70–79. S. Riedel and J. Clarke. 2006. Incremental integer linear programming for non-projective dependency parsing. In EMNLP. S. Riedel. 2009. Cutting plane MAP inference for Markov logic. Machine Learning. D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Hwee Tou Ng and Ellen Riloff, editors, CoNLL. D. Roth and W. Yih. 2007. Global inference for entity and relation identification via a linear programming formulation. 
In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. A.M. Rush and M. Collins. 2011. Exact decoding of syntactic translation models through Lagrangian relaxation. In ACL, pages 72–82, Portland, Oregon, USA, June. A. M. Rush, D. Sontag, M. Collins, and T. Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In EMNLP. V. Srikumar, G. Kundu, and D. Roth. 2012. On amortizing inference cost for structured prediction. In EMNLP. K. Toutanova, A. Haghighi, and C. D. Manning. 2008. A global joint model for semantic role labeling. Computational Linguistics, 34:161–191. 913
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 83–92, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Language-Independent Discriminative Parsing of Temporal Expressions Gabor Angeli Stanford University Stanford, CA 94305 [email protected] Jakob Uszkoreit Google, Inc. 1600 Amphitheatre Parkway Mountain View, CA 94303 [email protected] Abstract Temporal resolution systems are traditionally tuned to a particular language, requiring significant human effort to translate them to new languages. We present a language independent semantic parser for learning the interpretation of temporal phrases given only a corpus of utterances and the times they reference. We make use of a latent parse that encodes a language-flexible representation of time, and extract rich features over both the parse and associated temporal semantics. The parameters of the model are learned using a weakly supervised bootstrapping approach, without the need for manually tuned parameters or any other language expertise. We achieve state-of-the-art accuracy on all languages in the TempEval2 temporal normalization task, reporting a 4% improvement in both English and Spanish accuracy, and to our knowledge the first results for four other languages. 1 Introduction Temporal resolution is the task of mapping from a textual phrase describing a potentially complex time, date, or duration to a normalized (grounded) temporal representation. For example, possibly complex phrases such as the week before last1 are often more useful in their grounded form – e.g., August 4 - August 11. Many approaches to this problem make use of rule-based methods, combining regularexpression matching and hand-written interpretation functions. In contrast, we would like to learn the interpretation of a temporal expression probabilistically. This allows propagation of uncertainty to higher-level components, and the potential to 1Spoken on, for instance, August 20. dynamically back off to a rule-based system in the case of low confidence parses. In addition, we would like to use a representation of time which is broadly applicable to multiple languages, without the need for language-specific rules or manually tuned parameters. Our system requires annotated data consisting only of an input phrase and an associated grounded time, relative to some reference time; the language-flexible parse is entirely latent. Training data of this weakly-supervised form is generally easier to collect than the alternative of manually creating and tuning potentially complex interpretation rules. A large number of languages conceptualize time as lying on a one dimensional line. Although the surface forms of temporal expressions differ, the basic operations many languages use can be mapped to operations on this time line (see Section 3). Furthermore, many common languages share temporal units (hours, weekdays, etc.). By structuring a latent parse to reflect these semantics, we can define a single model which performs well on multiple languages. A discriminative parsing model allows us to define sparse features over not only lexical cues but also the temporal value of our prediction. For example, it allows us to learn that we are much more likely to express March 14th than 2pm in March – despite the fact that both interpretations are composed of similar types of components. 
Furthermore, it allows us to define both sparse n-gram and denser but less informative bag-of-words features over multi-word phrases, and allows us to handle numbers in a flexible way. We briefly describe our temporal representation and grammar, followed by a description of the learning algorithm; we conclude with experimental results on the six languages of the TempEval-2 A task. 83 2 Related Work Our approach follows the work of Angeli et al. (2012), both in the bootstrapping training methodology and the temporal grammar. Our foremost contributions over this prior work are: (i) the utilization of a discriminative parser trained with rich features; (ii) simplifications to the temporal grammar which nonetheless maintain high accuracy; and (iii) experimental results on 6 different languages, with state-of-the-art performance on both datasets on which we know of prior work. As in this previous work, our approach draws inspiration from work on semantic parsing. The latent parse parallels the formal semantics in previous work. Supervised approaches to semantic parsing prominently include Zelle and Mooney (1996), Zettlemoyer and Collins (2005), Kate et al. (2005), Zettlemoyer and Collins (2007), inter alia. For example, Zettlemoyer and Collins (2007) learn a mapping from textual queries to a logical form. Importantly, the logical form of these parses contain all of the predicates and entities used in the parse – unlike the label provided in our case, where a grounded time can correspond to any of a number of latent parses. Along this line, recent work by Clarke et al. (2010) and Liang et al. (2011) relax supervision to require only annotated answers rather than full logical forms. Related work on interpreting temporal expressions has focused on constructing hand-crafted interpretation rules (Mani and Wilson, 2000; Saquete et al., 2003; Puscasu, 2004; Grover et al., 2010). Of these, HeidelTime (Str¨otgen and Gertz, 2010) and SUTime (Chang and Manning, 2012) provide a strong comparison in English. Recent probabilistic approaches to temporal resolution include UzZaman and Allen (2010), who employ a parser to produce deep logical forms, in conjunction with a CRF classifier. In a similar vein, Kolomiyets and Moens (2010) employ a maximum entropy classifier to detect the location and temporal type of expressions; the grounding is then done via deterministic rules. In addition, there has been work on parsing Spanish expressions; UC3M (Vicente-D´ıez et al., 2010) produce the strongest results on the TempEval-2 corpus. Of the systems entered in the original task, TIPSem (Llorens et al., 2010) was the only system to perform bilingual interpretation for English and Spanish. Both of the above systems rely primarily on hand-built rules. 3 Temporal Representation We define a compositional representation of time, similar to Angeli et al. (2012), but with a greater focus on efficiency and simplicity. The representation makes use of a notion of temporal types and their associated semantic values; a grammar is constructed over these types, and is grounded by appealing to the associated values. A summary of the temporal type system is provided in Section 3.1; the grammar is described in Section 3.2; key modifications from previous work are highlighted in Section 3.3. 3.1 Temporal Types Temporal expressions are represented either as a Range, Sequence, or Duration. The root of a parse tree should be one of these types. 
In addition, phrases can be tagged as a Function; or, as a special Nil type corresponding to segments without a direct temporal interpretation. Lastly, a type is allocated for numbers. We describe each of these briefly below. Range [and Instant] A period between two dates (or times), as per an interval-based theory of time (Allen, 1981). This includes entities such as Today, 1987, or Now. Sequence A sequence of Ranges, occurring at regular but not necessarily constant intervals. This includes entities such as Friday, November 27th, or last Friday. A Sequence is defined in terms of a partial completion of calendar fields. For example, November 27th would define a Sequence whose year is unspecified, month is November, and day is the 27th; spanning the entire range of the lower order fields (in this case, a day). This example is illustrated in Figure 1. Note that a Sequence implicitly selects a possibly infinite number of possible Ranges. To select a particular grounded time for a Sequence, we appeal to a notion of a reference time (Reichenbach, 1947). For the TempEval-2 corpus, we approximate this as the publication time of the article. While this is conflating Reichenbach’s reference time with speech time, and comes at the expense of certain mistakes (see Section 5.3), it is nonetheless useful in practice. To a first approximation, grounding a sequence given a reference time corresponds to filling in the unspecified fields of the sequence with the fullyspecified fields of the reference time. This pro84 Sequence: year — mon Nov day 27th – 28th week — weekday — hour 00 min 00 sec 00 Reference Time: year 2013 mon Aug day 06th week 32 weekday Tue hour 03 min 25 sec 00 year 2013 mon Nov day 27th – 28th week — weekday — hour 00 min 00 sec 00 Figure 1: An illustration of grounding a Sequence. When grounding the Sequence November 27th with a reference time 2013-08-06 03:25:00, we complete the missing fields in the Sequence (the year) with the corresponding field in the reference time (2013). cess has a number of special cases not enumerated here,2 but the complexity remains constant time. Duration A period of time. This includes entities like Week, Month, and 7 days. A special case of the Duration type is defined to represent approximate durations, such as a few years or some days. Function A function of arity less than or equal to two representing some general modification to one of the above types. This captures semantic entities such as those implied in last x, the third x [of y], or x days ago. The particular functions are enumerated in Table 2. Nil A special Nil type denotes terms which are not directly contributing to the semantic meaning of the expression. This is intended for words such as a or the, which serve as cues without bearing temporal content themselves. Number Lastly, a special Number type is defined for tagging numeric expressions. 3.2 Temporal Grammar Our approach assumes that natural language descriptions of time are compositional in nature; that is, each word attached to a temporal phrase is compositionally modifying the meaning of the phrase. We define a grammar jointly over temporal types and values. The types serve to constrain the parse and allow for coarse features; the values encode specific semantics, and allow for finer features. At the root of a parse tree, we recursively apply 2Some of these special cases are caused by variable days of the month, daylight savings time, etc. 
Another class arises from pragmatically peculiar utterances; e.g., the next Monday in August uttered in the last week of August should ground to August of next year (rather than the reference time’s year). the functions in the tree to obtain a final temporal value. This approach can be presented as a rule-to-rule translation (Bach, 1976; Allen, 1995, p. 263), or a constrained Synchronous PCFG (Yamada and Knight, 2001). Formally, we define our grammar as G = (Σ, S, V, T, R). The alphabet Σ and start symbol S retain their usual interpretations. We define a set V to be the set of types, as described in Section 3.1. For each v ∈V we define an (infinite) set Tv corresponding to the possible instances of type v. Each node in the tree defines a pair (v, τ) such that τ ∈Tv. A rule R ∈R is defined as a pair R = vi →vjvk, f : (Tvj, Tvk) →Tvi  . This definition is trivially adapted for the case of unary rules. The form of our rules reveals the synchronous aspect of our grammar. The structure of the tree is bound by the first part over types v – these types are used to populate the chart, and allow for efficient inference. The second part is used to evaluate the semantics of the parse, τ ∈Tvi, and allows partial derivations to be discriminated based on richer information than the coarse types. We adopt the preterminals of Angeli et al. (2012). Each preterminal consists of a type and a value; neither which are lexically informed. That is, the word week and preterminal (Week, Duration) are not tied in any way. A total of 62 preterminals are defined corresponding to instances of Ranges, Sequences, and Durations; these are summarized in Table 1. In addition, 10 functions are defined for manipulating temporal expressions (see Table 2). The majority of these mirror generic operations on intervals on a timeline, or manipulations of a sequence. Notably, like intervals, times can be 85 Type Example Instances Range Past, Future, Yesterday, Tomorrow, Today, Reference, Year(n), Century(n) Sequence Friday, January, . . . DayOfMonth, DayOfWeek, . . . EveryDay, EveryWeek, . . . Duration Second, Minute, Hour, Day, Week, Month, Quarter, Year, Decade, Century Table 1: The content-bearing preterminals of the grammar, arranged by their types. Note that the Sequence type contains more elements than enumerated here; however, only a few of each characteristic type are shown here for brevity. Function Description shiftLeft Shift a Range left by a Duration shiftRight Shift a Range right by a Duration shrinkBegin Take the first Duration of a Range shrinkEnd Take the last Duration of a Range catLeft Take the Duration after a Range catRight Take the Duration before a Range moveLeft1 Shift a Sequence left by 1 moveRight1 Shift a Sequence right by 1 nth x of y Take the nth element in y approximate Make a Duration approximate Table 2: The functional preterminals of the grammar. The name and a brief description of the function are given; the functions are most easily interpreted as operations on either an interval or sequence. All operations on Ranges can equivalently be applied to Sequences. moved (3 weeks ago) or their size changed (the first two days of the month), or a new interval can be started from one of the endpoints (the last 2 days). Additionally, a sequence can be modified by shifting its origin (last Friday), or taking the nth element of the sequence within some bound (fourth Sunday in November). Combination rules in the grammar mirror typechecked curried function application. 
For instance, the function moveLeft1 applied to week (as in last week) yields a grammar rule: ( EveryWeek -1 , Seq. ) ( moveLeft1 , Seq.→Seq. ) ( EveryWeek , Seq. ) In more generality, we create grammar rules for applying a function on either the left or the right, for all possible type signatures of f: f(x, y) ⊙x or x ⊙f(x, y). Additionally, a grammar rule is created for intersecting two Ranges or Sequences, for multiplying a duration by a number, and for absorbing a Nil span. Each of these can be though of as an implicit function application (in the last case, the identity function). 3.3 Differences From Previous Work While the grammar formalism is strongly inspired by Angeli et al. (2012), a number of key differences are implemented to both simplify the framework, and make inference more efficient. Sequence Grounding The most timeconsuming and conceptually nuanced aspect of temporal inference in Angeli et al. (2012) is intersecting Sequences. In particular, there are two modes of expressing dates which resist intersection: a day-of-month-based mode and a week-based mode. Properly grounding a sequence which defines both a day of the month and a day of the week (or week of the year) requires backing off to an expensive search problem. To illustrate, consider the example: Friday the 13th. Although both a Friday and a 13th of the month are easily found, the intersection of the two requires iterating through elements of one until it overlaps with an element of the other. At training time, a number of candidate parses are generated for each phrase. When considering that these parses can become both complex and pragmatically unreasonable, this can result in a noticeable efficiency hit; e.g., during training a sentence could have a [likely incorrect] candidate interpretation of: nineteen ninety-six Friday the 13ths from now. In our Sequence representation, such intersections are disallowed, in the same fashion as February 30th would be. Sequence Pragmatics For the sake of simplicity the pragmatic distribution over possible groundings of a sequence is replaced with the single most likely offset, as learned empirically from the English TempEval-2 corpus by Angeli et al. (2012). No Tag Splitting The Number and Nil types are no longer split according to their ordinality/magnitude and subsumed phrase, respectively. 86 More precisely, there is a single nonterminal (Nil), rather than a nonterminal symbol characterizing the phrase it is subsuming (Nil-the, Nil-a, etc.). This information is encoded more elegantly as features. 4 Learning The system is trained using a discriminative kbest parser, which is able to incorporate arbitrary features over partial derivations. We describe the parser below, followed by the features implemented. 4.1 Parser Inference A discriminative k-best parser was used to allow for arbitrary features in the parse tree. In the first stage, spans of the input sentence are tagged as either text or numbers. A rule-based number recognizer was used for each language to recognize and ground numeric expressions, including information on whether the number was an ordinal (e.g., two versus second). Note that unlike conventional parsing, a tag can span multiple words. Numeric expressions are treated as if the numeric value replaced the expression. Each rule of the parse derivation was assigned a score according to a log-linear factor. 
Specifically, each rule R = (vi →vjvk, f) with features over the rule and derivation so far φ(R), subject to parameters θ, is given a probability: P(vi | vj, vk, f; θ) ∝eθT φ(R) (1) Na¨ıvely, this parsing algorithm gives us a complexity of O(n3k2), where n is the length of the sentence, and k is the size of the beam. However, we can approximate the algorithm in O(n3k log k) time with cube pruning (Chiang, 2007). With features which are not context-free, we are not guaranteed an optimal beam with this approach; however, empirically the approximation yields a significant efficiency improvement without noticeable loss in performance. Training We adopt an EM-style bootstrapping approach similar to Angeli et al. (2012), in order to handle the task of parsing the temporal expression without annotations for the latent parses. Each training instance is a tuple consisting of the words in the temporal phrase, the annotated grounded time τ ∗, and the reference time. Given an input sentence, our parser will output k possible parses; when grounded to the reference time these correspond to k candidate times: τ1 . . . τk, each with a normalized probability P(τi). This corresponds to an approximate E step in the EM algorithm, where the distribution over latent parses is approximated by a beam of size k. Although for long sentences the number of parses is far greater than the beam size, as the parameters improve, increasingly longer sentences will have correct derivations in the beam. In this way, a progressively larger percentage of the data is available to be learned from at each iteration. To approximate the M step, we define a multiclass hinge loss l(θ) over the beam, and optimize using Stochastic Gradient Descent with AdaGrad (Duchi et al., 2010): l(θ) = max 0≤i<k 1[τi ̸= τ ∗] + Pθ(τi) −Pθ(τ ∗) (2) We proceed to describe our features. 4.2 Features Our framework allows us to define arbitrary features over partial derivations. Importantly, this allows us to condition not only on the PCFG probabilities over types but also the partial semantics of the derivation. We describe the features used below; a summary of these features for a short phrase is illustrated in Figure 2. Bracketing Features A feature is defined over every nonterminal combination, consisting of the pair of children being combined in that rule. In particular, let us consider a rule R = (vi →vjvk, f) corresponding to a CFG rule vi →vjvk over types, and a function f over the semantic values corresponding to vj and vk: τj and τk. Two classes of bracketing features are extracted: features are extracted over the types of nonterminals being combined (vj and vk), and over the top-level semantic derivation of the nonterminals (f, τj, and τk). Unlike syntactic parsing, child types of a parse tree uniquely define the parent type of the rule; this is a direct consequence of our combination rules being functions with domains defined in terms of the temporal types, and therefore necessarily projecting their inputs into a single output type. Therefore, the first class of bracketing features – over types – reduce to have the exact same expressive power as the nonterminal CFG rules of Angeli et al. (2012). Examples of features in this class are features 13 and 15 in Figure 2 (b). 87 Input (w,t) ( Friday of this week , August 6 2013 ) Latent parse FRI ∩ EveryWeek FRI Friday EveryWeek Nil of this EveryWeek week Output τ ∗ August 9 2013 FRI Friday 1. < FRI , Friday > Nil of this 2. < Nil , of > 3. < Nil , this > 4. < Nil , of this > 5. 
< nil bias > EveryWeek week 6. < EveryWeek , week > EveryWeek Nil EveryWeek 7. < Nil of , EveryWeek > 8. < Nil this , EveryWeek > 9. < Nil of this , EveryWeek > 10. < Nil of , Sequence > 11. < Nil this , Sequence > 12. < Nil of this , Sequence > 13. < Nil , Sequence > 14. < Nil , EveryWeek > FRI ∩ EveryWeek FRI EveryWeek 15. < Sequence , Sequence > 16. < Intersect , FRI , EveryWeek > 17. < root valid > (a) (b) Figure 2: An example parse of Friday of this week, along with the features extracted from the parse. A summary of the input, latent parse, and output for a particular example is given in (a). The features extracted for each fragment of the parse are given in (b), and described in detail in Section 4.2. We now also have the flexibility to extract a second class of features from the semantics of the derivation. We define a feature bracketing the most recent semantic function applied to each of the two child derivations; along with the function being applied in the rule application. If the child is a preterminal, the semantics of the preterminal are used; otherwise, the outermost (most recent) function to be applied to the derivation is used. To illustrate, a tree fragment combining August and 2013 into August 2013 would yield the feature <INTERSECT, AUGUST, 2013>. This can be read as a feature for the rule applying the intersect function to August and 2013. Furthermore, intersecting August 2013 with the 12th of the month would yield a feature <INTERSECT, INTERSECT, 12th>. This can be read as applying the intersect function to a subtree which is the intersection of two terms, and to the 12th of the month. Features 14 and 16 in Figure 2 (b) are examples of such features. Lexical Features The second large class of features extracted are lexicalized features. These are primarily used for tagging phrases with preterminals; however, they are also relevant in incorporating cues from the yield of Nil spans. To illustrate, a week and the week have very different meanings, despite differing by only their Nil tagged tokens. In the first case, a feature is extracted over the value of the preterminal being extracted, and the phrase it is subsuming (e.g., features 1–4 and 6 in Figure 2 (b)). As the type of the preterminal is deterministic from the value, encoding a feature on the type of the preterminal would be a coarser encoding of the same information, and is empirically not useful in this case. Since a multi-word expression can parse to a single nonterminal, a feature is extracted for the entire n-gram in addition to features for each of the individual words. For example, the phrase of this – of type Nil – would have features extracted: <NIL, of>, <NIL, this>, and <NIL, of this>. In the second case – absorbing Nil-tagged spans – we extract features over the words under the Nil span joined with the type and value of the other derivation (e.g., features 7–12 in Figure 2 (b)). As above, features are extracted for both n-grams and for each word in the phrase. For example, combining of this and week would yield features 88 Train Test System Type Value Type Value GUTime 0.72 0.46 0.80 0.42 SUTime 0.85 0.69 0.94 0.71 HeidelTime 0.80 0.67 0.85 0.71 ParsingTime 0.90 0.72 0.88 0.72 OurSystem 0.94 0.81 0.91 0.76 Table 3: English results for TempEval-2 attribute scores for our system and four previous systems. The scores are calculated using gold extents, forcing an interpretation for each parse. 
Train Test System Type Value Type Value UC3M — — 0.79 0.72 OurSystem 0.90 0.84 0.92 0.76 Table 4: Spanish results for TempEval-2 attribute scores for our system and the best known previous system. The scores are calculated using gold extents, forcing an interpretation for each parse. <of, EVERYWEEK>, <this, EVERYWEEK>, and <of this, EVERYWEEK>. In both cases, numbers are featurized according to their order of magnitude, and whether they are ordinal. Thus, the number tagged from thirty-first would be featurized as an ordinal number of magnitude 2. Semantic Validity Although some constraints can be imposed to help ensure that a top-level parse will be valid, absolute guarantees are difficult. For instance, February 30 is never a valid date; but, it would be difficult to disallow any local rule in its derivation. To mediate this, an indicator feature is extracted denoting whether the grounded semantics of the derivation is valid. This is illustrated in Figure 2 (b) by feature 17. Nil Bias Lastly, an indicator feature is extracted for each Nil span tagged (feature 5 in Figure 2 (b)). In part, this discourages over-generation of the type; in another part, it encourages Nil spans to absorb as many adjacent words as possible. We proceed to describe our experimental setup and results. 5 Evaluation We evaluate our model on all six languages in the TempEval-2 Task A dataset (Verhagen et al., 2010), comparing against state-of-the-art systems for English and Spanish. New results are reported on smaller datasets from the four other languages. To our knowledge, there has not been any prior work on these corpora. We describe the TempEval-2 datasets in Section 5.1, present experimental results in Section 5.2, and discuss system errors in Section 5.3. 5.1 TempEval-2 Datasets TempEval-2, from SemEval 2010, focused on retrieving and reasoning about temporal information from newswire. Our system evaluates against Task A – detecting and resolving temporal expressions. Since we perform only the second of these, we evaluate our system assuming gold detection. The dataset annotates six languages: English, Spanish, Italian, French, Chinese, and Korean; of these, English and Spanish are the most mature. We describe each of these languages, along with relevant quirks, below: English The English dataset consists of 1052 training examples, and 156 test examples. Evaluation was done using the official evaluation script, which checks for exact match between TIMEX3 tags. Note that this is stricter than our training objective; for instance, 24 hours and a day have the same interpretation, but have different TIMEX3 strings. System output was heuristically converted to the TIMEX3 format; where ambiguities arose, the convention which maximized training accuracy was chosen. Spanish The Spanish dataset consists of 1092 training examples, and 198 test examples. Evaluation was identical to the English, with the heuristic TIMEX3 conversion adapted somewhat. Italian The Italian dataset consists of 523 training examples, and 126 test examples. Evaluation was identical to English and Spanish. Chinese The Chinese dataset consists of 744 training examples, and 190 test examples. Of these, only 659 training and 143 test examples had a temporal value marked; the remaining examples had a type but no value, and are therefore impossible to predict. Results are also reported on a clean corpus with these impossible examples omitted. The Chinese, Korean, and French corpora had noticeable inconsistencies in the TIMEX3 annotation. 
Thus, evaluations are reported according 89 Train Test Language # examples Type Value # examples Type Value English 1052 0.94 0.81 156 0.91 0.76 Spanish 1092 0.90 0.84 198 0.92 0.76 Italian 523 0.89 0.85 126 0.84 0.38 Chinese† 744 0.95 0.65 190 0.87 0.48 Chinese (clean)† 659 0.97 0.73 143 0.97 0.60 Korean† 247 0.83 0.67 91 0.82 0.42 French† 206 0.78 0.76 83 0.78 0.35 Table 5: Our system’s accuracy on all 6 languages of the TempEval-2 corpus. Chinese is divided into two results: one for the entire corpus, and one which considers only examples for which a temporal value is annotated. Languages with a dagger (†) were evaluated based on semantic rather than string-match correctness. to the training objective: if two TIMEX3 values ground to the same grounded time, they are considered equal. For example, in the example above, 24 hours and a day would be marked identical despite having different TIMEX3 strings. Most TIMEX3 values convert naturally to a grounded representation; values with wildcards representing Sequences (e.g., 1998-QX or 1998-XX-12) ground to the same value as the Sequence encoding that value would. For instance, 1998-QX is parsed as every quarter in 1998. Korean The Korean dataset consists of 287 training examples, and 91 test examples. 40 of the training examples encoded dates as a long integer For example: 003000000200001131951006 grounds to January 13, 2000 at the time 19:51. These were removed from the training set, yielding 247 examples; however, all three such examples were left in the test set. Evaluation was done identically to the Chinese data. French Lastly, a dataset for French temporal expressions was compiled from the TempEval-2 data. Unlike the other 5 languages, the French data included only the raw TIMEX3 annotated newswire documents, encoded as XML. These documents were scraped to recover 206 training examples and 83 test examples. Evaluation was done identically to the Chinese and Korean data. We proceed to describe our experimental results on these datasets. 5.2 Results We compare our system with state-of-the-art systems for both English and Spanish. To the best of our knowledge, no prior work exists for the other four languages. We evaluate in the same framework as Angeli et al. (2012). We compare to previous system scores when constrained to make a prediction on every example; if no guess is made, the output is considered incorrect. This in general yields lower results for those systems, as the system is not allowed to abstain on expressions it does not recognize. The systems compared against are: • GUTime (Mani and Wilson, 2000), a widely used, older rule-based system. • HeidelTime (Str¨otgen and Gertz, 2010), the top system at the TempEval-2 task for English. • SUTime (Chang and Manning, 2012), a more recent rule-based system for English. • ParsingTime (Angeli et al., 2012), a semantic parser for temporal expressions, similar to this system (see Section 2). • UC3M (Vicente-D´ıez et al., 2010), a rulebased system for Spanish. Results for the English corpus are shown in Table 3. Results for Spanish are shown in Table 4. Lastly, a summary of results in all six languages is shown in Table 5. A salient trend emerges from the results – while training accuracy is consistently high, test accuracy drops sharply for the data-impoverished languages. 
This is consistent with what would be expected from a discriminatively trained model in data-impoverished settings; however, the consistent training accuracy suggests that the model nonetheless captures the phenomena it sees in 90 Error Class English Spanish Pragmatics 29% 23% Type error 16% 5% Incorrect number 10% 5% Relative Range 7% 2% Incorrect parse 19% 36% Missing context 16% 23% Bad reference time 3% 6% Table 6: A summary of errors of our system, by percentage of incorrect examples for the English and Spanish datasets. The top section describes errors which could be handled in our framework, while the bottom section describes examples which are either ambiguous (missing context), or are annotated inconsistently relative the reference time. training. This suggests the possibility for improving accuracy further by making use of more data during training. 5.3 Discussion We characterize the examples our system parses incorrectly on the English and Spanish datasets in Table 6, expanding on each class of error below. Pragmatics This class of errors is a result of pragmatic ambiguity over possible groundings of a sequence – for instance, it is often ambiguous whether next weekend refers to the coming or subsequent weekend. These errors manifest in either dropping a function (next, last), or imagining one that is not supported by the text (e.g., this week parsed as next week). Type error Another large class of errors – particularly in the English dataset – arise from not matching the annotation’s type, but otherwise producing a reasonable response. For instance, the system may mistake a day on the calendar (a Range), with a day, the period of time. Incorrect number A class of mistakes arises from either omitting numbers from the parse, or incorrectly parsing numbers – the second case is particularly prevalent for written years, such as seventeen seventy-six. Relative Range These errors arise from attempting to parse a grounded Range by applying functions to the reference time. For example, from a reference time of August 8th, it is possible to “correctly” parse the phrase August 1 as a week ago; but, naturally, this parse does not generalize well. This class of errors, although relatively small, merits special designation as it suggests a class of correct responses which are correct for the wrong reasons. Future work could explore mitigating these errors for domains where the text is further removed from the events it is describing than most news stories are. Incorrect parse Errors in this class are a result of failing to find the correct parse, for a number of reasons not individually identified. A small subset of these errors (notably, 6% on the Spanish dataset) are a result of the grammar being insufficiently expressive with the preterminals we defined. For instance, we cannot capture fractional units, such as in half an hour. Missing context A fairly large percentage of our errors arise from failing to classify inputs which express ambiguous or poorly defined times. For example, from time to time (annotated as the future), or that time (annotated as 5 years). Many of these require either some sort of inference or a broader understanding of the context in which the temporal phrase is uttered, which our system does not attempt to capture. Bad reference time The last class of errors cover cases where the temporal phrase is clear, but annotation differs from our judgment of what would be reasonable. 
These are a result of assuming that the reference time of an utterance is the publication time of the article. 6 Conclusion We have presented a discriminative, multilingual approach to resolving temporal expressions, using a language-flexible latent parse and rich features on both the types and values of partial derivations in the parse. We showed state-of-the-art results on both languages in TempEval-2 with prior work, and presented results on four additional languages. Acknowledgments Work was done in the summer of 2012 while the first author was an intern at Google. We would like to thank Chris Manning, and our co-workers at Google for their insight and help. 91 References James F. Allen. 1981. An interval-based representation of temporal knowledge. In Proceedings of the 7th international joint conference on Artificial intelligence, pages 221–226, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. James Allen. 1995. Natural Language Understanding. Benjamin/Cummings, Redwood City, CA. Gabor Angeli, Christopher D. Manning, and Daniel Jurafsky. 2012. Parsing time: Learning to interpret time expressions. In NAACL-HLT. E. Bach. 1976. An extension of classical transformational grammar. In Problems of Linguistic Metatheory (Proceedings of the 1976 Conference), Michigan State University. Angel Chang and Chris Manning. 2012. SUTIME: a library for recognizing and normalizing time expressions. In Language Resources and Evaluation. David Chiang. 2007. Hierarchical phrase-based translation. computational linguistics, 33(2):201–228. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In CoNLL, pages 18–27, Uppsala, Sweden. John Duchi, Elad Hazan, and Yoram Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Claire Grover, Richard Tobin, Beatrice Alex, and Kate Byrne. 2010. Edinburgh-LTG: TempEval-2 system description. In Proceedings of the 5th International Workshop on Semantic Evaluation, Sem-Eval, pages 333–336. Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal languages. In AAAI, pages 1062–1068, Pittsburgh, PA. Oleksandr Kolomiyets and Marie-Francine Moens. 2010. KUL: recognition and normalization of temporal expressions. In Proceedings of the 5th International Workshop on Semantic Evaluation, Sem-Eval ’10, pages 325–328. P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In ACL. Hector Llorens, Estela Saquete, and Borja Navarro. 2010. Tipsem (english and spanish): Evaluating crfs and semantic roles in tempeval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 284–291. Inderjeet Mani and George Wilson. 2000. Robust temporal processing of news. In ACL, pages 69–76, Hong Kong. G. Puscasu. 2004. A framework for temporal resolution. In LREC, pages 1901–1904. Hans Reichenbach. 1947. Elements of Symbolic Logic. Macmillan, New York. E. Saquete, R. Muoz, and P. Martnez-Barco. 2003. Terseo: Temporal expression resolution system applied to event ordering. In Text, Speech and Dialogue, pages 220–228. Jannik Str¨otgen and Michael Gertz. 2010. Heideltime: High quality rule-based extraction and normalization of temporal expressions. In Proceedings of the 5th International Workshop on Semantic Evaluation, Sem-Eval, pages 321–324. Naushad UzZaman and James F. Allen. 2010. 
TRIPS and TRIOS system for TempEval-2: Extracting temporal information from text. In Proceedings of the 5th International Workshop on Semantic Evaluation, Sem-Eval, pages 276–283. Marc Verhagen, Roser Sauri, Tommaso Caselli, and James Pustejovsky. 2010. SemEval-2010 task 13: TempEval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 57–62, Uppsala, Sweden. María Teresa Vicente-Díez, Julián Moreno Schneider, and Paloma Martínez. 2010. UC3M system: Determining the extent, type and value of time expressions in TempEval-2. In Proceedings of the Semantic Evaluation–2 (SemEval 2010), ACL Conference, Uppsala, Sweden. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In ACL, pages 523–530. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In AAAI/IAAI, pages 1050–1055, Portland, OR. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI, pages 658–666. AUAI Press. Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In EMNLP-CoNLL, pages 678–687.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 914–923, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression Asli Celikyilmaz Microsoft Mountain View, CA, USA [email protected] Dilek Hakkani-Tur, Gokhan Tur Microsoft Research Mountain View, CA, USA [email protected] [email protected] Ruhi Sarikaya Microsoft Redmond, WA, USA [email protected] Abstract Finding concepts in natural language utterances is a challenging task, especially given the scarcity of labeled data for learning semantic ambiguity. Furthermore, data mismatch issues, which arise when the expected test (target) data does not exactly match the training data, aggravate this scarcity problem. To deal with these issues, we describe an efficient semisupervised learning (SSL) approach which has two components: (i) Markov Topic Regression is a new probabilistic model to cluster words into semantic tags (concepts). It can efficiently handle semantic ambiguity by extending standard topic models with two new features. First, it encodes word n-gram features from labeled source and unlabeled target data. Second, by going beyond a bag-of-words approach, it takes into account the inherent sequential nature of utterances to learn semantic classes based on context. (ii) Retrospective Learner is a new learning technique that adapts to the unlabeled target data. Our new SSL approach improves semantic tagging performance by 3% absolute over the baseline models, and also compares favorably on semi-supervised syntactic tagging. 1 Introduction Semantic tagging is used in natural language understanding (NLU) to recognize words of semantic importance in an utterance, such as entities. Typically, a semantic tagging model require large amount of domain specific data to achieve good performance (Tur and DeMori, 2011). This requires a tedious and time intensive data collection and labeling process. In the absence of large labeled training data, the tagging model can behave poorly on test data (target domain). This is usually caused by data mismatch issues and lack of coverage that arise when the target data does not match the training data. To deal with these issues, we present a new semi-supervised learning (SSL) approach, which mainly has two components. It initially starts with training supervised Conditional Random Fields (CRF) (Lafferty et al., 2001) on the source training data which has been semantically tagged. Using the trained model, it decodes unlabeled dataset from the target domain. With the data mismatch issues in mind, to correct errors that the supervised model make on the target data, the SSL model leverages the additional information by way of a new clustering method. Our first contribution is a new probabilistic topic model, Markov Topic Regression (MTR), which uses rich features to capture the degree of association between words and semantic tags. First, it encodes the n-gram context features from the labeled source data and the unlabeled target data as prior information to learn semantic classes based on context. Thus, each latent semantic class corresponds to one of the semantic tags found in labeled data. MTR is not invariant to reshuffling of words due to its Markovian property; hence, word-topic assignments are also affected by the topics of the surrounding words. 
Because of these properties, MTR is less sensitive to the errors caused by the semantic ambiguities. Our SSL uses MTR to smooth the semantic tag posteriors on the unlabeled target data (decoded using the CRF model) and later obtains the best tag sequences. Using the labeled source and automati914 cally labeled target data, it re-trains a new CRFmodel. Although our iterative SSL learning model can deal with the training and test data mismatch, it neglects the performance effects caused by adapting the source domain to the target domain. In fact, most SSL methods used for adaptation, e.g., (Zhu, 2005), (Daum´e-III, 2010), (Subramanya et al., 2010), etc., do not emphasize this issue. With this in mind, we introduce a new iterative training algorithm, Retrospective Learning, as our second contribution. While retrospective learning iteratively trains CRF models with the automatically annotated target data (explained above), it keeps track of the errors of the previous iterations so as to carry the properties of both the source and target domains. In short, through a series of experiments we show how MTR clustering provides additional information to SSL on the target domain utterances, and greatly impacts semantic tagging performance. Specifically, we analyze MTR’s performance on two different types of semantic tags: named-entities and descriptive tags as shown in Table 1. Our experiments show that it is much harder to detect descriptive tags compared to named-entities. Our SSL approach uses probabilistic clustering method tailored for tagging natural language utterances. To the best of our knowledge, our work is the first to explore the unlabeled data to iteratively adapt the semantic tagging models for target domains, preserving information from the previous iterations. With the hope of spurring related work in domains such as entity detection, syntactic tagging, etc., we extend the earlier work on SSL partof-speech (POS) tagging and show in the experiments that our approach is not only useful for semantic tagging but also syntactic tagging. The remainder of this paper is divided as follows: §2 gives background on SSL and semantic clustering methods, §3 describes our new clustering approach, §4 presents the new iterative learning, §5 presents our experimental results and §6 concludes our paper. 2 Related Work and Motivation (I) Semi-Supervised Tagging. Supervised methods for semantic tagging in NLU require a large number of in-domain human-labeled utterances and gazetteers (movie, actor names, etc.), increas• Are there any [comedies] with [Ryan Gosling]? • How about [oscar winning] movies by [James Cameron]? • Find [Woody Allen] movies similar to [Manhattan]. [Named Entities] director: James Cameron, Woody Allen,... actor: Ryan Gosling, Woody Allen,... title: Manhattan, Midnight in Paris,... [Descriptive Tags] restriction: similar, suitable, free,rate,... description: oscar winning, new release, gardening,... genre: spooky, comedies, feel good, romance,... Table 1: Samples of semantically tagged utterances from movie domain, named-entities and descriptive tags. ing the need for significant manual labor (Tur and DeMori, 2011). Recent work on similar tasks overcome these challenges using SSL methods as follows: • (Wang et al., 2009; Li et al., 2009; Li, 2010; Liu et al., 2011) investigate web query tagging using semi-supervised sequence models. They extract semantic lexicons from unlabeled web queries, to use as features. 
Our work differs from these, in that, rather than just detecting named-entities, our utterances include descriptive tags (see Table 1). • Typically the source domain has different distribution than the target domain, due to topic shifts in time, newly introduced features (e.g., until recently online articles did not include facebook ”like” feature.), etc. Adapting the source domain using unlabeled data is the key to achieving good performance across domains. Recent adaptation methods for SSL use: expectation minimization (Daum´e-III, 2010) graph-based learning (Chapelle et al., 2006; Zhu, 2005), etc. In (Subramanya et al., 2010) an efficient iterative SSL method is described for syntactic tagging, using graph-based learning to smooth POS tag posteriors. However, (Reisinger and Mooney, 2011) argues that vector space models, such as graph-learning, may fail to capture the richness of word meaning, as similarity is not a globally consistent metric. Rather than graph-learning, we present a new SSL using a probabilistic model, MTR, to cluster words based on co-occurrence statistics. • Most iterative SSL methods, do not keep track of the errors made, nor consider the divergence from the original model. (Lavoie et al., 2011) argues that iterative learning models should mitigate new errors made by the model at each iteration by 915 keeping the history of the prior predictions. This ensures that a penalty is paid for diverging from the previous model’s predictions, which will be traded off against the benefit of reducing classification loss. We present a retrospective SSL for CRF, in that, the iterative learner keeps track of the errors of the previous iterations so as to carry the properties of both the source and target domains. (II) Semantic Clustering. A common property of several context-based word clustering techniques, e.g., Brown clustering (Brown et al., 1992), Clustering by Committee (Pantel, 2003), etc., is that they mainly cluster based on local context such as nearby words. Standard topic models, such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003), use a bag-of-words approach, which disregards word order and clusters words together that appear in a similar global context. Such models have been effective in discovering lexicons in many NLP tasks, e.g., named-entity recognition (Guo et al., 2009), word-sense disambiguation (Boyd-Graber et al., 2007; Li et al., 2010), syntactic/semantic parsing (Griffiths et al., 2005; Singh et al., 2010), speaker identification (Nyugen et al., 2012), etc. Recent topic models consider word sequence information in documents (Griffiths et al., 2005; Moon et al., 2010). The Hidden Topic Markov Model (HTMM) by (Gruber et al., 2005), for instance, models sentences in documents as Markov chains, assuming all words in a sentence have the same topic. While MTR has a similar Markovian property, we encode features on words to allow each word in an utterance to sample from any of the given semantic tags, as in ”what are [scary]genre movies by [Hitchcock]director?”. In LDA, common words tend to dominate all topics causing related words to end up in different topics. In (Petterson et al., 2010), the vectorbased features of words are used as prior information in LDA so that the words that are synonyms end up in same topic. Thus, we build a semantically rich topic model, MTR, using word context features as side information. 
Using a smoothing prior for each word-topic pair (instead of a constant β smoother), MTR assures that the words are distributed over topics based on how similar they are. (e.g., ”scary” and ”spooky”, which have similar context features, go into the same semantic tag, ”genre”). Thus, to best of our knowledge, MTR is the first topic model to incorporate word features while considering the sequence of words. 3 Markov Topic Regression - MTR 3.1 Model and Abstractions LDA assumes that the latent topics of documents are sampled independently from one of K topics. MTR breaks down this independence assumption by allowing Markov relations between the hidden tags to capture the relations between consecutive words (as sketched in Figure 1 and Algorithm 1). (I) Semantic Tags (si): Each word wi of a given utterance with Nj words, uj={wi}Nj i=1∈U, j=1,..|U|, from a set of utterances U, is associated with a latent semantic tag (state) variable si∈S, where S is the set of semantic tags. We assume a fixed K topics corresponding to semantic tags of labeled data. In a similar way to HTMM (Gruber et al., 2005) described for documents, MTR samples each si from a Markov chain that is specific to its utterance uj. Each state si generates a word, wi, based on the word-state co-occurrences. MTR allows for sampling of consecutive words from different tag clusters. The initial probabilities of the latent states are sampled from a Dirichlet distribution over state variables, θj, with α hyperparameter for each uj. (II) Tag Transition Indicator (ψv): Given utterance uj, the decision to sample a wi from a new topic is determined by an indicator variable, cj,i, that is sampled from a Binomial(ψv=wi) distribution with a Beta conjugate prior. (There are v binomials for each vocabulary term.) cj,i=1 suggests that a new state be sampled from K possible tags for the word wi in uj, and cj,i=0 suggests that the state si of wi should be the same as the previous word’s latent state si−1. The first position of the sequence is sampled from a new state, hence cj,i=1=1. (III) Tag Transition Base Measure (η): Prior probability of a word given a tag should increase the chances of sampling words from the correct semantic tag. MTR constrains the generation of a tag si given the previous tag si−1 and the current wi based on cj,i by using a vocabulary specific Beta prior, ψv∼Beta(ηv) 1, on each word in vocabulary wv=1,..V . We inject the prior information on semantic tags to define values of the base measure ηv using external knowledge from two sources: (a) Entity Priors (ηS): Prior probability on named-entities and descriptive tags denoted as 1For each beta distribution we use symmetric Beta(ηv)=Beta(α=ηv,β=ηv). 916 latent semantic tag distribution over semantic tags s1 ... w1 ... !j " c2 c3 # $kv %kv xv &k s2 s3 w2 w3 wn ' V K |U| V indicator for sampling semantic tags vocabulary features as prior information semantic tag dependent smoothing coefficient semantic tag indicator parameter prior on per-word state transitions $k ! Dir(%kv|x;&k) !k = exp(f(x;&k)) semantic tag distribution over tags smoother for tag-word pair cNj sNj Figure 1: The graph representation of the Markov Topic Regression (MTR). To demonstrate hidden state Markov Chain, the generation of each word is explicitly shown (inside of the plate). ηS=p(si|si−1,wi=v,wi−1). We use web sources (wiki pages on movies and urls such as imdb.com) and labeled training data to extract entity lists that correspond to the semantic tags of our domains. 
We keep the frequency of each n-gram to convert into (empirical) prior probability distribution. (b) Language Model Prior (ηW ): Probabilities on word transitions denoted as ηW =p(wi=v|wi−1). We built a language model using SRILM (Stolcke, 2002) on the domain specific sources such as top wiki pages and blogs on online movie reviews, etc., to obtain the probabilities of domain-specific n-grams, up to 3-grams. The observed priors, ηS and ηW , are used for calculating the base measure η for each vocabulary wv as: ηsi|si−1 v = ( ηsi|si−1,wi=v S , if ηsi|si−1,wi=v S exists, ηwi=v,wi−1 W , otherwise (1) In Eq.(1), we assume that the prior on the semantic tags, ηS, is more indicative of the decision for sampling a wi from a new tag compared to language model posteriors on word sequences, ηW . Here we represent the base-measure (hyperparameter) of the semantic tag indicator variable, which is not to be confused with a probability measure 2 We update the indicator parameter via mean criteria, ψv=wi=PK i,j=1ηsi|sj v=wi/(K2). If no prior on 2The base-measure used in Eq.(1) does not relate to a back-off model in LM sense. Here, instead of using a constant value for the hyper-parameters, we use probability scores that we obtain from LM. Algorithm 1 Markov Topic Regression 1: for each semantic tag topic sk, k ←1, ..., K do 2: −draw a topic mixture φk ∼Dir(βk|λk, x), 3: −let βk=exp(f(x;λk)); x={xv}Vl v=1, βk∈RVl 4: for each word wv in vocabulary v ←1, ..., V do 5: −draw a tag indicator mixture ψv ∼Beta(η), 6: for each utterance j ←1, ..., |U| do 7: −draw transition distribution θs j ∼Dir(α) 8: over states si and set cj1=1. 9: −for words wi in uj, i ←1, ..., Nj do 10: ⋄if i >1, toss a coin cj,i ∼Binomial(ψwi). 11: ⋄If cj,i=1, draw si∼Multi(θ si,si−1 j )† 12: otherwise si=si−1. 13: ⋄Sample wi∼Multi(φsi). † Markov assumption over utterance words is used (See Eq.(4)). a specific word exists, a default value is used for base measure, ηv=0.01. (IV) Topic-Word Distribution Priors (βk): Different from (Mimno et al., 2008), which uses asymmetric hyper-parameters on document-topic distributions, in MTR, we learn the asymmetric hyper-parameters of the semantic tag-word distributions. We use blocked Gibbs sampling, in which the topic assignments sk and hyper-parameters {βk}K k=1 are alternately sampled at each Gibbs sampling lag period g given all other variables. We impose the prior knowledge on naturally related words, such that if two words ”funny” and ”hilarious” indicate the same given ”genre” class, then their latent tag distributions should also be similar. We enforce this on smoothing parameter βk,v, e.g., βk,′funny′∼βk,′hilarious′ for a given tag k as follows: At each g lag period of the Gibbs sampling, K log-linear models with parameters, λ(g) k ∈RM, is trained to predict β(g) kv ∈βk, for each wv of a tag sk: β(g) k = exp(f(xl; λ(g) k )) (2) where the log-linear function f is: n(g) kv = f(xl v; λ(g) k ) = X m λ(g) k,mxl v,m (3) Here x∈RV ×M is the input matrix x, wherein rows xv∈RM represents M-dimensional scalar vector of explanatory features on vocabulary words. We use the word-tag posterior probabilities obtained from a CRF sequence model trained on labeled utterances as features. The x={xl,xu} has labeled (l) and unlabeled (u) parts. The labeled part contains Vl size vocabulary of which we know the semantic tags, xl={(xl 1,s1),...,(xl Vl,sVl)}. At the start of the Gibbs sampling, we designate the 917 K latent topics to the K semantic tags of our labeled data. 
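As a rough illustration of how the hyper-parameters in Eqs. (1)–(3) fit together, the sketch below computes the per-word base measure, the indicator prior psi_v, and a log-linear beta smoother. The dictionary keying of the priors, the toy prior entries, and the feature vectors are assumptions made for the example; this is not the authors' implementation.

```python
import numpy as np

K = 3                # number of semantic tags (kept small for the example)
DEFAULT_ETA = 0.01   # default base measure when no prior information exists

def base_measure(word, prev_word, tag, prev_tag, entity_prior, lm_prior):
    """Eq. (1): use the entity/tag prior eta_S when it exists for this word and
    tag pair, otherwise back off to the language-model prior eta_W, else a default."""
    key_s = (tag, prev_tag, word)                 # assumed keying of eta_S
    if key_s in entity_prior:
        return entity_prior[key_s]
    return lm_prior.get((word, prev_word), DEFAULT_ETA)

def indicator_prior(word, prev_word, entity_prior, lm_prior):
    """psi_v: mean of the base measures over all K x K tag pairs (the update rule
    given after Eq. (1))."""
    total = sum(base_measure(word, prev_word, s, s_prev, entity_prior, lm_prior)
                for s in range(K) for s_prev in range(K))
    return total / (K * K)

def beta_smoother(x_v, lam_k):
    """Eqs. (2)-(3): the tag-k smoother for word v is exp of a linear function of
    the word's feature vector x_v under weights lambda_k."""
    return float(np.exp(np.dot(lam_k, x_v)))

entity_prior = {(0, 1, "hitchcock"): 0.8}   # toy eta_S entry (hypothetical)
lm_prior = {("hitchcock", "by"): 0.3}       # toy eta_W entry (hypothetical)
print(base_measure("hitchcock", "by", 0, 1, entity_prior, lm_prior))  # 0.8
print(indicator_prior("scary", "any", entity_prior, lm_prior))        # 0.01 (defaults)
print(beta_smoother(np.array([1.0, 0.2]), np.array([2.0, -1.0])))     # exp(1.8)
```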
Therefore, we assign labeled words to their designated topics. This way we use observed scalar counts of each labeled word v associated with its semantic tag k, n(g) kv , as the output label of its input vector, xl v; an indication of likelihood of words getting sampled from the corresponding semantic label sk. Since the impact of the asymmetric prior is equivalent to adding pseudocounts to the sufficient statistics of the semantic tag to which the word belongs, we predict the pseudo-counts β(g) kv using the scalar counts of the labeled data, n(g) kv , based on the log-linear model in Eq. (2). At g=0, we use β(0) kv =28, if xv∈Xl; otherwise β(0) kv =2−2, commonly used values for large and small β. Note that larger β-values indicate correlation between the word and the topic. 3.2 Collapsed Sampler The goal of MTR is to infer the degree of relationship between a word v and each semantic tag k, φkv. To perform inference we need two components: • a sampler which can draw from conditional PMTR(sji=k|sji−1, s\ji, α, ψi, βji), when cj,i=1, where sji and sji−1 are the semantic tags of the current wi=v of vocabulary v and previous word wi−1 in utterance uj, and s\ji are the semantic tag topics of all words except for wi; and, • an estimation procedure for (βkv, λk) (see §3.1). We integrate out the multinomial and binomial parameters of the model: utterance-tag distributions θj, binomial state transition indicator distribution per each word ψv, and φk for tag-word distributions. We use collapsed Gibbs sampling to reduce random components and model the posterior distribution by obtaining samples (sji, cj,i) drawn from this distribution. Under the Markov assumption, for each word wi=v in a given utterance uj, if cj,i=1, we sample a new tag si=k given the remaining tags and hyper-parameters βk, α, and ηsi|si−1 wi=v . Using the following parameters; n(si) ji , which is the number of words assigned to a semantic class si=k excluding case i, and n(si−1) si is the number of transitions from class si−1 to si, where indicator I(si−1, si)=1 if slot si=si−1, the update equation is formulated as follows: p(sji = k|w, s−ji, α, ηsi|si−1 wi , βk) ∝ n(si) ji + βkwi n(k) (.) + P v βkv ∗(n(si−1) si + α)∗ (n(si) si+1 + I(si−1, si) + I(si+1, si) + α) n(si) (.) + I(si−1, k) + Kα (4) 4 Semi-Supervised Semantic Labeling 4.1 Semi Supervised Learning (SSL) with CRF In (Subramanya et al., 2010), a new SSL method is described for adapting syntactic POS tagging of sentences in newswire articles along with search queries to a target domain of natural language (NL) questions. They decode unlabeled queries from target domain (t) using a CRF model trained on the POS-labeled newswire data (source domain (o)). The unlabeled POS tag posteriors are then smoothed using a graph-based learning algorithm. On graph, the similarities are defined over sequences by constructing the graph over types, word 3-grams, where types capture the local context of words. Since CRF tagger only uses local features of the input to score tag pairs, they try to capture all the context with the graph with additional context features on types. Later, using viterbi decoding, they select the 1-best POS tag sequence, s∗ j for each utterance uj. Graph-based SSL defines a new CRF objective function: Λ(t) n+1 =argmin Λ∈RK ( −P j=1:l log p(sj|uj; Λ(t) n ) + µ∥Λ(t) n ∥2 ) − n τ Pl+u j=l log pn(s∗ j|uj; Λ(t) n ) o (5) The first bracket in Eq.(5) is the loss on the labeled data and L2 regularization on parameters, Λ(t) n , from nth iteration, same as standard CRF. 
The last term is the loss on unlabeled data from target domain with a hyper-parameter τ. They use a small value for τ to enable the new model to be as close as possible to the initial model trained on source data. 4.2 Retrospective Semi-Supervised CRF We describe a Retrospective SSL (R-SSL) training with CRF (Algorithm 2), using MTR as a 918 smoothing model, instead of a graph-based model, as follows: I. DECODING and SMOOTHING. The posterior probability of a tag sji=k given a word wji in unlabeled utterance uj from target domain (t) ˆpn(j, i)=ˆpn(sji=k|wji; Λ(t) n ), is decoded using the n-th iteration CRF model. MTR uses the decoded probabilities as semantic tag prior features on vocabulary items. We generate a word-tag matrix of posteriors, x∈(0, 1)V ×K, where K is the number of semantic tags and V is the vocabulary size from n-th iteration. Each row is a K dimensional vector of tag posterior probabilities xv={xv1,. . . xvK} on the vocabulary term, wv. The labeled rows xl of the vocabulary matrix, x={xl,xu}, contain only {0,1} values, indicating the word’s observed semantic tags in the labeled data. Since a labeled term wv can have different tags (e.g., ”clint eastwood” may be tagged as actor-name and directorname in the training data), PK k xvk≥1 holds. The x is used as the input matrix of the kth log-linear model (corresponding to kth semantic tag (topic)) to infer the β hyper-parameter of MTR in Eq. (2). MTR generates smoothed conditional probabilities φkv for each vocabulary term v given semantic tag k. II. INTERPOLATION. For each word wji=v in unlabeled utterance uj, we interpolate tag marginals from CRF and MTR for each semantic tag sji = k: ˆqn(sji|wij; Λ(t) n ) = π CRF posterior z }| { ˆpn(sji|wij; Λ(t) n ) +(1 −π) MTR z}|{ φkv (6) III. VITERBI. Using viterbi decoding over the tag marginals, ˆqn(sji|wij; Λ(t) n ), and transition probabilities obtained from the CRF model of n-th iteration, we get ˆpn(s∗ j|uj; Λ(t) n ), the 1-best decode s∗ j of each unlabeled utterance uj∈Uu n. IV. RETROSPECTIVE SSL (R-SSL). After we decode the unlabeled data, we re-train a new CRF model at each iteration. Each iteration makes predictions on the semantic tags of unlabeled data with varying posterior probabilities. Motivated by (Lavoie et al., 2011), we want the loss function to have a dependency on the prior model predictions. Thus, R-SSL encodes the history of the prior preAlgorithm 2 Retrospective Semi-Supervised CRF Input: Labeled Ul, and unlabeled Uu data. Process: Λ(o) n =crf-train(Ul) at n=0, n=n+1 †. While not converged ˆp=posterior-decode(Uu n,Λ(o) n ) φ=smooth-posteriors(ˆp) using MTR, ˆq=interpolate-posteriors(ˆp,φ), Uu n=viterbi-decode(ˆq) Λ(t) n+1=crf-retrospective(Ul, Uu n,. . . ,Uu 1 ,Λ(t) n ) † (n):iteration, (t):target, (o):source domains. dictions, as follows: Λ(t) n+1 =argmin Λ∈RK ( −P j=1:l log p(sj|uj; Λ(t) n ) + µ∥Λ(t) n ∥2 ) ( −P j=1:(l+u) max{0, ˆp∗∗ n } ) (7) where, ˆp∗∗ n =1 −log hn(uj)ˆpn(s∗ j|uj; Λ(t) n ). The first two terms are same as standard CRF. The last term ensures that the predictions of the current model have the same sign as the predictions of the previous models (using labeled and unlabeled data), denoted by a maximum margin hinge weight, hn(uj)= 1 n−1 Pn−1 1 ˆpn(s∗ j|uj; Λ(t) n ). It should also be noted that with MTR, the R-SSL learns the word-tag relations by using features that describe the words in context, eliminating the need for additional type representation of graph-based model. 
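A compact way to picture how the interpolation of Eq. (6), the hinge history weight of Eq. (7), and the loop of Algorithm 2 interact is sketched below. The crf_train, crf_posterior_decode, mtr_smooth, and viterbi arguments are stand-ins for the components described above, not real library calls, and the loop is a simplification of the actual training procedure.

```python
import numpy as np

PI = 0.5   # interpolation weight reported in the experiments

def interpolate(crf_posteriors, mtr_phi):
    """Eq. (6): mix CRF tag marginals with MTR's smoothed word-tag probabilities.
    Both arguments are arrays of shape (num_words, num_tags)."""
    return PI * crf_posteriors + (1.0 - PI) * mtr_phi

def hinge_weight(previous_best_scores):
    """h_n(u_j): mean of the 1-best scores assigned by earlier models, used in
    Eq. (7) to penalize divergence from previous predictions."""
    return sum(previous_best_scores) / len(previous_best_scores)

def retrospective_ssl(labeled, unlabeled, crf_train, crf_posterior_decode,
                      mtr_smooth, viterbi, iterations=5):
    """A simplified rendering of Algorithm 2 (hypothetical helper signatures)."""
    model = crf_train(labeled, [])                    # supervised CRF on source data
    history = {i: [] for i in range(len(unlabeled))}  # per-utterance 1-best scores
    for _ in range(iterations):
        auto_labeled = []
        for i, u in enumerate(unlabeled):
            p = crf_posterior_decode(model, u)        # CRF tag marginals
            q = interpolate(p, mtr_smooth(u, p))      # Eq. (6)
            tags, score = viterbi(model, u, q)        # 1-best tag sequence + score
            history[i].append(score)
            auto_labeled.append((u, tags, hinge_weight(history[i])))
        model = crf_train(labeled, auto_labeled)      # retrain with history weights
    return model
```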
MTR provides a separate probability distribution θj over tags for each utterance j, implicitly allowing for the same word v in separate utterances to differ in tag posteriors φkv. 5 Experiments 5.1 Datasets and Tagsets 5.1.1 Semantic Tagging Datasets We focus here on audiovisual media in the movie domain. The user is expected to interact by voice with a system than can perform a variety of tasks such as browsing, searching, querying information, etc. To build initial NLU models for such a dialog system, we used crowd-sourcing to collect and annotate utterances, which we consider our source domain. Given movie domain-specific tasks, we asked the crowd about how they would 919 interact with the media system as if they were talking to a person. Our data from target domain is internally collected from real-use scenarios of our spoken dialog system. The transcribed text forms of these utterances are obtained from speech recognition engine. Although the crowd-sourced data is similar to target domain, in terms of pre-defined user intentions, the target domain contains more descriptive vocabulary, which is almost twice as large as the source domain. This causes data-mismatch issues and hence provides a perfect test-bed for a domain adaptation task. In total, our corpus has a 40K semantically tagged utterances from each source and target domains. There are around 15 named-entity and 10 descriptive tags. We separated 5K utterances to test the performance of the semantic tagging models. The most frequent entities are: movie-director (’James Cameron’), movie-title (’Die Hard’), etc.; whereas top descriptive tags are: genre (’feel good’), description (’black and white’, ’pg 13’), review-rate (’epic’, ’not for me’), theater-location (’near me’,’city center’), etc. Unlabeled utterances similar to the movie domain are pulled from a month old web query logs and extracted over 2 million search queries from well-known sites, e.g., IMDB, Netflix, etc. We filtered queries that are similar to our target set that start with wh-phrases (’what’, ’who’, etc.) as well as imperatives ’show’, ’list’, etc. In addition, we extracted web n-grams and entity lists (see §3) from movie related web sites, and online blogs and reviews. We collected around 300K movie review and blog entries on the entities observed in our data. We extract prior distributions for entities and n-grams to calculate entity list η and word-tag β priors (see §3.1). 5.1.2 Syntactic Tagging Datasets We use the Wall Street Journal (WSJ) section of the Penn Treebank as our labeled source data. Following previous research, we train on sections 0018, comprised of 38,219 POS-tagged sentences. To evaluate the domain adaptation (DA) approach and to compare with results reported by (Subramanya et al., 2010), we use the first and second half of QuestionBank (Judge et al., 2006) as our development and test sets (target). The QuestionBank contains 4000 POS-tagged questions, however it is difficult to tag with WSJ-trained taggers because the word order is different than WSJ and contains a test-set vocabulary that is twice as large as the one in the development set. As for unlabeled data we crawled the web and collected around 100,000 questions that are similar in style and length to the ones in QuestionBank, e.g. ”wh” questions. There are 36 different tag sets in the Penn dataset which includes tag labels for verbs, nouns, adjectives, adverbs, modal, determiners, prepositions, etc. 
More information about the Penn Tree-bank tag set can be found here (Marcus et al., 1993). 5.2 Models We evaluated several baseline models on two tasks: 5.2.1 Semantic Clustering Since MTR provides a mixture of properties adapted from earlier models, we present performance benchmarks on tag clustering using: (i) LDA; (ii) Hidden Markov Topic Model HMTM (Gruber et al., 2005); and, (iii) w-LDA (Petterson et al., 2010) that uses word features as priors in LDA. When a uniform β hyper-parameter is used with no external information on the state transitions in MTR, it reduces to a HMTM model. Similarly, if no Markov properties are used (bag-ofwords), MTR reduces to w-LDA. Each topic model uses Gibbs sampling for inference and parameter learning. We sample models for 1000 iterations, with a 500-iteration burn-in and a sampling lag of 10. For testing we iterated the Gibbs sampler using the trained model for 10 iterations on the testing data. 5.2.2 SSL for Semantic/Syntactic Tagging We evaluated three different baselines against our SSL models: ⋆CRF: a standard supervised sequence tagging. ⋆Self-CRF: a wrapper method for SSL using self-training. First a supervised learning algorithm is used to build a CRF model based on the labeled data. A CRF model is used to decode the unlabeled data to generate more labeled examples for re-training. ⋆SSL-Graph: A SSL model presented in (Subramanya et al., 2010) that uses graph-based learning as posterior tag smoother for CRF model using Eq.(5). In addition to the three baseline, we evaluated three variations of our SSL method: ⋆SSL-MTR: Our first version of SSL uses MTR to 920 LDA w-LDA HMTM MTR 0.6 0.7 0.8 0.9 82% 77% 84% 82% 79% 78% 74% • Descriptive Tags ♦Named-Entities ■All Tags F-Measure Figure 2: F-measure for semantic clustering performance. Performance differences for three different baseline models and our MTR approach by different semantic tags. smooth the semantic tag posteriors of a unlabeled data decoded by the CRF model using Eq.(5). ⋆R-SSL-Graph: Our second version uses graph-learning to smooth the tag posteriors and retrain a new CRF model using retrospective SSL in Eq.(7). ⋆R-SSL-MTR: Our full model uses MTR as a Bayesian smoothing model, and retrospective SSL in Eq.(7) for iterative CRF training. For all the CRF models, we use lexical features consisting of unigrams in a five-word window around the current word. To include contextual information, we add binary features for all possible tags. We inject dictionary constraints to all CRF models, such as features indicating label prior information. For each model we use several named entity features, e.g., movie-title, actorname, etc., non-named entity (descriptive) features, e.g., movie-description, movie-genre, and domain independent dictionaries, e.g, time, location, etc. For graph-based learning, we implemented the algorithm presented in (Subramanya et al., 2010) and used the same hyper-parameters and features. For the rest of the hyper-parameters, we used: α=0.01 for MTR, π=0.5 for interpolation mixing. These parameters were chosen based on the performance of the development set. All CRF objective functions were optimized using Stochastic Gradient Descent. 5.3 Results and Discussions 5.3.1 Experiment 1: Clustering Semantic Tags. Here, we want to demonstrate the performance of MTR model for capturing relationships between words and semantic tags against baseline topic models: LDA, HMTM, w-LDA. 
We take the semantically labeled utterances from the movie target domain and use the first half for training and the rest for performance testing. We use all the collected unlabeled web queries from the movie domain. For fair comparison, each benchmark topic model is provided with prior information on word-semantic tag distributions based on the labeled training data, hence, each K latent topic is assigned to one of K semantic tags at the beginning of Gibbs sampling. We evaluate the performance separately on descriptive tags, named-entities, and all tags together. The performance of the four topic models are reported in Figure 2. LDA shows the worst performance, even though some supervision is provided by way of labeled semantic tags. Although w-LDA improves semantic clustering performance over LDA, the fact that it does not have Markov properties makes it fall short behind MTR. As for the effect of word features in MTR, we see a 3% absolute performance gain over the second best performing HMTM baseline on named-entity tags, a 1% absolute gain on descriptive tags and a 2% absolute overall gain. As expected, we see a drop in F-measure on all models on descriptive tags. 5.3.2 Experiment 2: Domain Adaptation Task. We compare the performance of our SSL model to that of state-of-the-art models on semantic and syntactic tagging. Each SSL model is built using labeled training data from the source domain and unlabeled training data from target domain. In Table 2 we show the results on Movie and QuestionBank target test datasets. The results of SSL-Graph on QuestionBank is taken from (Subramanya et al., 2010). The selftraining model, Self-CRF adds 3% improvement over supervised CRF models on movie domain, but does not improve syntactic tagging. Because it is always inherently biased towards the source domain, self-training tends to reinforce the knowledge that the supervised model already has. SSL-Graph works much better for both syntactic and semantic tagging compared to CRF and Self-CRF models. Our Bayesian MTR efficiently extracts information from the unlabeled data for the target domain. Combined with retrospective training, R-SSL-MTR demonstrates noticeable improvements, ∼2% on descriptive tags, and 1% absolute gains in overall semantic tag921 ging performance over SSL-Graph. On syntactic tagging, the two retrospective learning models is comparable, close to 1% improvement over the SSL-Graph and SSL-MTR. Movie Domain QBank Model Desc. NE All POS CRF 75.05 75.84 75.84 83.80 Self-CRF 78.96 79.53 79.19 84.00 SSL-Graph 80.27 81.35 81.23 86.80 SSL-MTR 79.87 79.31 79.19 86.30 R-SSL-Graph 80.58 81.95 81.52 87.12 R-SSL-MTR 82.76 82.27 82.24 87.34 Table 2: Domain Adaptation performance in F-measure on Semantic Tagging on Movie Target domain and POS tagging on QBank:QuestionBank. Best performing models are bolded. 5.3.3 Experiment 3: Analysis of Semantic Disambiguation. Here we focus on the accuracy of our models in tagging semantically ambiguous words. We investigate words that have more than one observed semantic tag in training data, such as ”are there any [war]genre movies available.”, ”remove all movies about [war]description.”). Our corpus contained 30,000 unique vocabulary, 55% of which are contained in one or more semantic categories. Only 6.5% of those are tagged as multiple categories (polysemous), which are the sources of semantic ambiguity. Table-3 shows the precision of two best models for most confused words. 
We compare our two best SSL models with different smoothing regularizes: R-SSL-MTR (MTR) and R-SSL-Graph (GRAPH). We use precision and recall criterion on semantically confused words. In Table 3 we show two most frequent descriptive tags; genre and description, and commonly misclassified words by the two models. Results indicate that the R-SSL-MTR, performs better than the R-SSL-Graph, in activating the correct meaning of a word. The results indicate that incorporating context information with MTR is an effective option for identifying semantic ambiguity. 6 Conclusions We have presented a novel semi supervised learning approach using a probabilistic clustering genre description Vocab. GRAPH MTR GRAPH MTR war 50% 100% 75% 88% popular 90% 89% 80% 100% kids 78% 86% − 100% crime 49% 80% 86% 67% zombie 67% 89% 67% 86% Table 3: Classification performance in F-measure for semantically ambiguous words on the most frequently confused descriptive tags in the movie domain. method to semantically tag spoken language utterances. Our results show that encoding priors on words and context information contributes significantly to the performance of semantic clustering. We have also described an efficient iterative learning model that can handle data inconsistencies that leads to performance increases in semantic and syntactic tagging. As a future work, we will investigate using session data, namely the entire dialog between the human and the computer. Rather than using single turn utterances, we hope to utilize the context information, e.g., information from previous turns for improving the performance of the semantic tagging of the current turns. References D. Blei, A. Ng, and M. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research. J. Boyd-Graber, D. Blei, and X. Zhu. 2007. A topic model for word sense disambiguation. Proc. EMNLP. P.F. Brown, V.J.D. Pietra, P.V. deSouza, and J.C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479. O. Chapelle, B. Schlkopf, and Alexander Zien. 2006. Semi-supervised learning. MIT Press. H. Daum´e-III. 2010. Frustratingly easy semisupervised domain adaptation. Proc. Workshop on Domain Adaptation for Natural Language Processing at ACL. T.L Griffiths, M. Steyvers, D.M. Blei, and J.M. Tenenbaum. 2005. Integrating topics and syntax. Proc. of NIPS. A. Gruber, M. Rosen-Zvi, and Y. Weiss. 2005. Hidden topic markov models. Proc. of ICML. H. Guo, H. Zhu, Z. Guo, X. Zhang, X. Wu, and Z. Su. 2009. Domain adaptation with latent semantic association for named entity recognition. Proc. NAACL. 922 J. Judge, A. Cahill, and J.Van Genabith. 2006. Question-bank: Creating corpus of parse-annotated questions. Proc. Int. Conf. Computational Linguistics and ACL. A. Lavoie, M.E. Otey, N. Ratliff, and D. Sculley. 2011. History dependent domain adaptation. Proc. NIPS Workshop on Domain Adaptation. X. Li, Y.-Y. Wang, and A. Acero. 2009. Extracting structured information from user queries with semisupervised conditional random fields. Proc. of SIGIR. L. Li, B. Roth, and C. Sporleder. 2010. Topic models for word sense disambiguation and token-based idiom detection. Proc. ACL. X. Li. 2010. Understanding semantic structure of noun phrase queries. Proc. ACL. J Liu, X. Li, A. Acero, and Ye-Yi Wang. 2011. Lexicon modeling for query understanding. Proc. of ICASSP. M. P. Marcus, B. Santorini, and M.A. Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 27:1–30. D. Mimno, W. 
Li, and A. McCallum. 2008. Topic models conditioned on arbitrary features with dirichlet-multinomial regression. Proc. UAI. T. Moon, K. Erk, and J. Baldridge. 2010. Crouching dirichlet, hidden markov model: Unsupervised pos tagging with context local tag generation. Proc. ACL. V.-A. Nyugen, J. Boyd-Graber, and P. Resnik. 2012. Sits: A hierarchical nonparametric model using speaker identity for topic segmentation in multiparty conversations. Proc. ACL. P. Pantel. 2003. Clustering by committee. Ph.D. Thesis, University of Alberta, Edmonton, Alta., Canada. J. Petterson, A. Smola, T. Caetano, W. Buntine, and S. Narayanamurthy. 2010. Word features for latent dirichlet allocation. In Proc. NIPS. J. Reisinger and R. Mooney. 2011. Cross-cutting models of lexical semantics. In Proc. of EMNLP. S. Singh, D. Hillard, and C. Leggetter. 2010. Minimally-supervised extraction of entities from text advertisements. Proc. NAACL-HLT. A. Stolcke. 2002. An extensible language modeling toolkit. Proc. Interspeech. A. Subramanya, S. Petrov, and F. Pereira. 2010. Efficient graph-based semi-supervised learning of structured tagging models. In Proc. EMNLP. G. Tur and R. DeMori. 2011. Spoken language understanding: Systems for extracting semantic information from speech. Wiley Press. Y.-Y. Wang, R. Hoffman, X. Li, and J. Syzmanski. 2009. Semi-supervised learning of semantic classes for query understanding from the web and for the web. In The 18th ACM Conference on Information and Knowledge Management. X. Zhu. 2005. Semi-supervised learning literature survey. Technical Report 1530, University of Wisconsin-Madison. 923
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 924–932, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Parsing Graphs with Hyperedge Replacement Grammars David Chiang Information Sciences Institute University of Southern California Jacob Andreas Columbia University University of Cambridge Daniel Bauer Department of Computer Science Columbia University Karl Moritz Hermann Department of Computer Science University of Oxford Bevan Jones University of Edinburgh Macquarie University Kevin Knight Information Sciences Institute University of Southern California Abstract Hyperedge replacement grammar (HRG) is a formalism for generating and transforming graphs that has potential applications in natural language understanding and generation. A recognition algorithm due to Lautemann is known to be polynomial-time for graphs that are connected and of bounded degree. We present a more precise characterization of the algorithm’s complexity, an optimization analogous to binarization of contextfree grammars, and some important implementation details, resulting in an algorithm that is practical for natural-language applications. The algorithm is part of Bolinas, a new software toolkit for HRG processing. 1 Introduction Hyperedge replacement grammar (HRG) is a context-free rewriting formalism for generating graphs (Drewes et al., 1997), and its synchronous counterpart can be used for transforming graphs to/from other graphs or trees. As such, it has great potential for applications in natural language understanding and generation, and semantics-based machine translation (Jones et al., 2012). Figure 1 shows some examples of graphs for naturallanguage semantics. A polynomial-time recognition algorithm for HRGs was described by Lautemann (1990), building on the work of Rozenberg and Welzl (1986) on boundary node label controlled grammars, and others have presented polynomial-time algorithms as well (Mazanek and Minas, 2008; Moot, 2008). Although Lautemann’s algorithm is correct and tractable, its presentation is prefaced with the remark: “As we are only interested in distinguishing polynomial time from non-polynomial time, the analysis will be rather crude, and implementation details will be explicated as little as possible.” Indeed, the key step of the algorithm, which matches a rule against the input graph, is described at a very high level, so that it is not obvious (for a non-expert in graph algorithms) how to implement it. More importantly, this step as described leads to a time complexity that is polynomial, but potentially of very high degree. In this paper, we describe in detail a more efficient version of this algorithm and its implementation. We give a more precise complexity analysis in terms of the grammar and the size and maximum degree of the input graph, and we show how to optimize it by a process analogous to binarization of CFGs, following Gildea (2011). The resulting algorithm is practical and is implemented as part of the open-source Bolinas toolkit for hyperedge replacement grammars. 2 Hyperedge replacement grammars We give a short example of how HRG works, followed by formal definitions. 2.1 Example Consider a weighted graph language involving just two types of semantic frames (want and believe), two types of entities (boy and girl), and two roles (arg0 and arg1). Figure 1 shows a few graphs from this language. Figure 2 shows how to derive one of these graphs using an HRG. 
The derivation starts with a single edge labeled with the nonterminal symbol S . The first rewriting step replaces this edge with a subgraph, which we might read as “The 924 boy′ girl′ want′ arg0 arg1 boy′ believe′ arg1 want′ believe′ arg1 want′ arg1 girl′ arg0 boy′ arg0 arg1 arg0 Figure 1: Sample members of a graph language, representing the meanings of (clockwise from upper left): “The girl wants the boy,” “The boy is believed,” and “The boy wants the girl to believe that he wants her.” boy wants something (X) involving himself.” The second rewriting step replaces the X edge with another subgraph, which we might read as “The boy wants the girl to believe something (Y) involving both of them.” The derivation continues with a third rewriting step, after which there are no more nonterminal-labeled edges. 2.2 Definitions The graphs we use in this paper have edge labels, but no node labels; while node labels are intuitive for many graphs in NLP, using both node and edge labels complicates the definition of hyperedge grammar and algorithms. All of our graphs are directed (ordered), as the purpose of most graph structures in NLP is to model dependencies between entities. Definition 1. An edge-labeled, ordered hypergraph is a tuple H = ⟨V, E, ℓ⟩, where • V is a finite set of nodes • E ⊆V+ is a finite set of hyperedges, each of which connects one or more distinct nodes • ℓ: E →C assigns a label (drawn from the finite set C) to each edge. For brevity we use the terms graph and hypergraph interchangeably, and similarly for edge and hyperedge. In the definition of HRGs, we will use the notion of hypergraph fragments, which are the elementary structures that the grammar assembles into hypergraphs. Definition 2. A hypergraph fragment is a tuple ⟨V, E, ℓ, X⟩, where ⟨V, E, ℓ⟩is a hypergraph and X ∈V+ is a list of distinct nodes called the external nodes. The function of graph fragments in HRG is analogous to the right-hand sides of CFG rules and to elementary trees in tree adjoining grammars (Joshi and Schabes, 1997). The external nodes indicate how to integrate a graph into another graph during a derivation, and are analogous to foot nodes. In diagrams, we draw them with a black circle ( ). Definition 3. A hyperedge replacement grammar (HRG) is a tuple G = ⟨N, T, P, S ⟩where • N and T are finite disjoint sets of nonterminal and terminal symbols • S ∈N is the start symbol • P is a finite set of productions of the form A →R, where A ∈N and R is a graph fragment over N ∪T. We now describe the HRG rewriting mechanism. Definition 4. Given a HRG G, we define the relation H ⇒G H′ (or, H′ is derived from H in one step) as follows. Let e = (v1 · · · vk) be an edge in H with label A. Let (A →R) be a production of G, where R has external nodes XR = (u1 · · · uk). Then we write H ⇒G H′ if H′ is the graph formed by removing e from H, making an isomorphic copy of R, and identifying vi with (the copy of) ui for i = 1, . . . , k. Let H ⇒∗ G H′ (or, H′ is derived from H) be the reflexive, transitive closure of ⇒G. The graph language of a grammar G is the (possibly infinite) set of graphs H that have no edges with nonterminal labels such that S ⇒∗ G H. 
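One way to render Definitions 1–4 in code is sketched below: a hypergraph stores labeled hyperedges over node ids, a fragment additionally lists its external nodes in order, and rewriting removes a nonterminal edge and glues in a copy of the fragment by identifying external nodes with the edge's nodes. The Graph and Fragment classes and the toy fragment are invented for this illustration and are not the Bolinas data structures.

```python
from itertools import count

_fresh = count()   # shared counter for fresh edge ids and copied node names

class Graph:
    def __init__(self):
        self.nodes = set()
        self.edges = {}                    # edge id -> (label, tuple of nodes)

    def add_edge(self, label, nodes):
        eid = next(_fresh)
        self.nodes.update(nodes)
        self.edges[eid] = (label, tuple(nodes))
        return eid

class Fragment(Graph):
    def __init__(self, external):
        super().__init__()
        self.external = tuple(external)    # ordered external nodes (the filled circles)

def rewrite(graph, edge_id, fragment):
    """Definition 4: remove the nonterminal edge e, copy the fragment, and identify
    the fragment's i-th external node with the i-th node of e."""
    _, attach = graph.edges.pop(edge_id)
    assert len(attach) == len(fragment.external)
    node_map = dict(zip(fragment.external, attach))       # external -> attachment node
    for u in fragment.nodes:                              # every other node is copied
        node_map.setdefault(u, "copy%d" % next(_fresh))
    for label, nodes in fragment.edges.values():
        graph.add_edge(label, [node_map[u] for u in nodes])

# Start graph: a single S edge, as at the top of the derivation in Figure 2.
h = Graph()
s_edge = h.add_edge("S", ["n0"])
# Toy fragment standing in for a rule's right-hand side (much simpler than Figure 2).
r = Fragment(external=["x0"])
r.add_edge("boy'", ["x0"])
r.add_edge("X", ["x0", "x1"])
rewrite(h, s_edge, r)
print(h.edges)   # the S edge is gone; boy' and the nonterminal X edge remain
```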
When a HRG rule (A →R) is applied to an edge e, the mapping of external nodes in R to the 925 1 X → believe′ arg1 girl′ arg0 1 Y 1 2 Y → 1 2 want′ arg0 arg1 S 1 boy′ X want′ arg1 arg0 2 believe′ arg1 want′ arg1 girl′ arg0 boy′ arg0 Y 3 want′ believe′ arg1 want′ arg1 girl′ arg0 boy′ arg0 arg1 arg0 Figure 2: Derivation of a hyperedge replacement grammar for a graph representing the meaning of “The boy wants the girl to believe that he wants her.” nodes of e is implied by the ordering of nodes in e and XR. When writing grammar rules, we make this ordering explicit by writing the left hand side of a rule as an edge and indexing the external nodes of R on both sides, as shown in Figure 2. HRG derivations are context-free in the sense that the applicability of each production depends on the nonterminal label of the replaced edge only. This allows us to represent a derivation as a derivation tree, and sets of derivations of a graph as a derivation forest (which can in turn represented as hypergraphs). Thus we can apply many of the methods developed for other context free grammars. For example, it is easy to define weighted and synchronous versions of HRGs. Definition 5. If K is a semiring, a K-weighted HRG is a tuple G = ⟨N, T, P, S, λ⟩, where ⟨N, T, P, S ⟩is a HRG and λ : P →K assigns a weight in K to each production. The weight of a derivation of G is the product of the weights of the productions used in the derivation. We defer a definition of synchronous HRGs until Section 4, where they are discussed in detail. 3 Parsing Lautemann’s recognition algorithm for HRGs is a generalization of the CKY algorithm for CFGs. Its key step is the matching of a rule against the input graph, analogous to the concatenation of two spans in CKY. The original description leaves open how this matching is done, and because it tries to match the whole rule at once, it has asymptotic complexity exponential in the number of nonterminal edges. In this section, we present a refinement that makes the rule-matching procedure explicit, and because it matches rules little by little, similarly to binarization of CFG rules, it does so more efficiently than the original. Let H be the input graph. Let n be the number of nodes in H, and d be the maximum degree of any node. Let G be a HRG. For simplicity, we assume that the right-hand sides of rules are connected. This restriction entails that each graph generated by G is connected; therefore, we assume that H is connected as well. Finally, let m be an arbitrary node of H called the marker node, whose usage will become clear below.1 3.1 Representing subgraphs Just as CKY deals with substrings (i, j] of the input, the HRG parsing algorithm deals with edgeinduced subgraphs I of the input. An edgeinduced subgraph of H = ⟨V, E, ℓ⟩is, for some 1To handle the more general case where H is not connected, we would need a marker for each component. 926 subset E′ ⊆E, the smallest subgraph containing all edges in E′. From now on, we will assume that all subgraphs are edge-induced subgraphs. In CKY, the two endpoints i and j completely specify the recognized part of the input, wi+1 · · · wj. Likewise, we do not need to store all of I explicitly. Definition 6. Let I be a subgraph of H. A boundary node of I is a node in I which is either a node with an edge in H\I or an external node. A boundary edge of I is an edge in I which has a boundary node as an endpoint. 
The boundary representation of I is the tuple ⟨bn(I), be(I, v), m ∈I⟩, where • bn(I) is the set of boundary nodes of I • be(I, v) be the set of boundary edges of v in I • (m ∈I) is a flag indicating whether the marker node is in I. The boundary representation of I suffices to specify I compactly. Proposition 1. If I and I′ are two subgraphs of H with the same boundary representation, then I = I′. Proof. Case 1: bn(I) is empty. If m ∈I and m ∈I′, then all edges of H must belong to both I and I′, that is, I = I′ = H. Otherwise, if m < I and m < I′, then no edges can belong to either I or I′, that is, I = I′ = ∅. Case 2: bn(I) is nonempty. Suppose I , I′; without loss of generality, suppose that there is an edge e that is in I \ I′. Let π be the shortest path (ignoring edge direction) that begins with e and ends with a boundary node. All the edges along π must be in I \ I′, or else there would be a boundary node in the middle of π, and π would not be the shortest path from e to a boundary node. Then, in particular, the last edge of π must be in I\I′. Since it has a boundary node as an endpoint, it must be a boundary edge of I, but cannot be a boundary edge of I′, which is a contradiction. □ If two subgraphs are disjoint, we can use their boundary representations to compute the boundary representation of their union. Proposition 2. Let I and J be two subgraphs whose edges are disjoint. A node v is a boundary node of I ∪J iffone of the following holds: (i) v is a boundary node of one subgraph but not the other (ii) v is a boundary node of both subgraphs, and has an edge which is not a boundary edge of either. An edge is a boundary edge of I ∪J iffit has a boundary node of I ∪J as an endpoint and is a boundary edge of I or J. Proof. (⇒) v has an edge in either I or J and an edge e outside both I and J. Therefore it must be a boundary node of either I or J. Moreover, e is not a boundary edge of either, satisfying condition (ii). (⇐) Case (i): without loss of generality, assume v is a boundary node of I. It has an edge e in I, and therefore in I ∪J, and an edge e′ outside I, which must also be outside J. For e < J (because I and J are disjoint), and if e′ ∈J, then v would be a boundary node of J. Therefore, e′ < I ∪J, so v is a boundary node of I ∪J. Case (ii): v has an edge in I and therefore I ∪J, and an edge not in either I or J. □ This result leads to Algorithm 1, which runs in time linear in the number of boundary nodes. Algorithm 1 Compute the union of two disjoint subgraphs I and J. for all v ∈bn(I) do E ←be(I, v) ∪be(J, v) if v < bn(J) or v has an edge not in E then add v to bn(I ∪J) be(I ∪J, v) ←E for all v ∈bn(J) do if v < bn(I) then add v to bn(I ∪J) be(I ∪J, v) ←be(I, v) ∪be(J, v) (m ∈I ∪J) ←(m ∈I) ∨(m ∈J) In practice, for small subgraphs, it may be more efficient simply to use an explicit set of edges instead of the boundary representation. For the GeoQuery corpus (Tang and Mooney, 2001), whose graphs are only 7.4 nodes on average, we generally find this to be the case. 3.2 Treewidth Lautemann’s algorithm tries to match a rule against the input graph all at once. But we can optimize the algorithm by matching a rule incrementally. This is analogous to the rank-minimization problem for linear context-free rewriting systems. Gildea has shown that this problem is related to 927 the notion of treewidth (Gildea, 2011), which we review briefly here. Definition 7. 
A tree decomposition of a graph H = ⟨V, E⟩is a tree T, each of whose nodes η is associated with sets Vη ⊆V and Eη ⊆E, with the following properties: 1. Vertex cover: For each v ∈V, there is a node η ∈T such that v ∈Vη. 2. Edge cover: For each e = (v1 · · · vk) ∈E, there is exactly one node η ∈T such that e ∈ Eη. We say that η introduces e. Moreover, v1, . . . , vk ∈Vη. 3. Running intersection: For each v ∈V, the set {η ∈T | v ∈Vη} is connected. The width of T is max |Vη| −1. The treewidth of H is the minimal width of any tree decomposition of H. A tree decomposition of a graph fragment ⟨V, E, X⟩is a tree decomposition of ⟨V, E⟩that has the additional property that all the external nodes belong to Vη for some η. (Without loss of generality, we assume that η is the root.) For example, Figure 3b shows a graph, and Figure 3c shows a tree decomposition. This decomposition has width three, because its largest node has 4 elements. In general, a tree has width one, and it can be shown that a graph has treewidth at most two iffit does not have the following graph as a minor (Bodlaender, 1997): K4 = Finding a tree decomposition with minimal width is in general NP-hard (Arnborg et al., 1987). However, we find that for the graphs we are interested in in NLP applications, even a na¨ıve algorithm gives tree decompositions of low width in practice: simply perform a depth-first traversal of the edges of the graph, forming a tree T. Then, augment the Vη as necessary to satisfy the running intersection property. As a test, we extracted rules from the GeoQuery corpus (Tang and Mooney, 2001) using the SynSem algorithm (Jones et al., 2012), and computed tree decompositions exactly using a branchand-bound method (Gogate and Dechter, 2004) and this approximate method. Table 1 shows that, in practice, treewidths are not very high even when computed only approximately. method mean max exact 1.491 2 approximate 1.494 3 Table 1: Mean and maximum treewidths of rules extracted from the GeoQuery corpus, using exact and approximate methods. (a) 0 a believe′ arg1 b girl′ arg0 1 Y (b) 0 1 0 b 1 0 a b 1 arg1 a b 1 Y ∅ 0 b arg0 b girl′ ∅ 0 believe′ ∅ Figure 3: (a) A rule right-hand side, and (b) a nice tree decomposition. Any tree decomposition can be converted into one which is nice in the following sense (simplified from Cygan et al. (2011)). Each tree node η must be one of: • A leaf node, such that Vη = ∅. • A unary node, which introduces exactly one edge e. • A binary node, which introduces no edges. The example decomposition in Figure 3c is nice. This canonical form simplifies the operation of the parser described in the following section. Let G be a HRG. For each production (A → R) ∈G, find a nice tree decomposition of R and call it TR. The treewidth of G is the maximum 928 treewidth of any right-hand side in G. The basic idea of the recognition algorithm is to recognize the right-hand side of each rule incrementally by working bottom-up on its tree decomposition. The properties of tree decomposition allow us to limit the number of boundary nodes of the partially-recognized rule. More formally, let R⊵η be the subgraph of R induced by the union of Eη′ for all η′ equal to or dominated by η. Then we can show the following. Proposition 3. Let R be a graph fragment, and assume a tree decomposition of R. All the boundary nodes of R⊵η belong to Vη ∩Vparent(η). Proof. Let v be a boundary node of R⊵η. Node v must have an edge in R⊵η and therefore in Rη′ for some η′ dominated by or equal to η. Case 1: v is an external node. 
Since the root node contains all the external nodes, by the running intersection property, both Vη and Vparent(η) must contain v as well. Case 2: v has an edge not in R⊵η. Therefore there must be a tree node not dominated by or equal to η that contains this edge, and therefore v. So by the running intersection property, η and its parent must contain v as well. □ This result, in turn, will allow us to bound the complexity of the parsing algorithm in terms of the treewidth of G. 3.3 Inference rules We present the parsing algorithm as a deductive system (Shieber et al., 1995). The items have one of two forms. A passive item has the form [A, I, X], where X ∈V+ is an explicit ordering of the boundary nodes of I. This means that we have recognized that A ⇒∗ G I. Thus, the goal item is [S, H, ǫ]. An active item has the form [A →R, η, I, φ], where • (A →R) is a production of G • η is a node of TR • I is a subgraph of H • φ is a bijection between the boundary nodes of R⊵η and those of I. The parser must ensure that φ is a bijection when it creates a new item. Below, we use the notation {e 7→e′} or {e 7→X} for the mapping that sends each node of e to the corresponding node of e′ or X. Passive items are generated by the following rule: • Root [B →Q, θ, J, ψ] [B, J, X] where θ is the root of TQ, and Xj = ψ(XQ,j). If we assume that the TR are nice, then the inference rules that generate active items follow the different types of nodes in a nice tree decomposition: • Leaf [A →R, η, ∅, ∅] where η is a leaf node of TR. • (Unary) Nonterminal [A →R, η1, I, φ] [B, J, X] [A →R, η, I ∪J, φ ∪{e 7→X}] where η1 is the only child of η, and e is introduced by η and is labeled with nonterminal B. • (Unary) Terminal [A →R, η1, I, φ] [A →R, η, I ∪{e′}, φ ∪{e 7→e′}] where η1 is the only child of η, e is introduced by η, and e and e′ are both labeled with terminal a. • Binary [A →R, η1, I, φ1] [A →R, η2, J, φ2] [A →R, η, I ∪J, φ1 ∪φ2] where η1 and η2 are the two children of η. In the Nonterminal, Terminal, and Binary rules, we form unions of subgraphs and unions of mappings. When forming the union of two subgraphs, we require that the subgraphs be disjoint (however, see Section 3.4 below for a relaxation of this condition). When forming the union of two mappings, we require that the result be a bijection. If either of these conditions is not met, the inference rule cannot apply. For efficiency, it is important to index the items for fast access. For the Nonterminal inference rule, passive items [B, J, X] should be indexed by key ⟨B, |bn(J)|⟩, so that when the next item on the agenda is an active item [A →R, η1, I, φ], we know that all possible matching passive items are 929 S → X X X X → a a a a a (a) (b) a a a a a a (c) Figure 4: Illustration of unsoundness in the recognition algorithm without the disjointness check. Using grammar (a), the recognition algorithm would incorrectly accept the graph (b) by assembling together the three overlapping fragments (c). under key ⟨ℓ(e), |e|⟩. Similarly, active items should be indexed by key ⟨ℓ(e), |e|⟩so that they can be found when the next item on the agenda is a passive item. For the Binary inference rule, active items should be indexed by their tree node (η1 or η2). This procedure can easily be extended to produce a packed forest of all possible derivations of the input graph, representable as a hypergraph just as for other context-free rewriting formalisms. 
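Before moving on to how derivations in this forest are scored, it may help to make the subgraph bookkeeping behind these inference rules concrete. The following Python sketch is ours, not taken from the released implementation: it implements the boundary-representation union of Algorithm 1, and the `Boundary` container and the `H.edges_at` adjacency accessor are assumed helpers.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Set

@dataclass
class Boundary:
    """Boundary representation of an edge-induced subgraph I of H."""
    bn: Set[int]                       # boundary nodes of I
    be: Dict[int, FrozenSet[int]]      # boundary node -> its boundary edges in I
    has_marker: bool                   # whether the marker node m lies in I

def boundary_union(H, I: Boundary, J: Boundary) -> Boundary:
    """Union of two edge-disjoint subgraphs given only their boundary
    representations (Algorithm 1 / Proposition 2).  H.edges_at(v) is an
    assumed accessor returning all edges of the input graph incident to v."""
    bn: Set[int] = set()
    be: Dict[int, FrozenSet[int]] = {}
    for v in I.bn:
        E = I.be.get(v, frozenset()) | J.be.get(v, frozenset())
        # v remains a boundary node if it is not a boundary node of J,
        # or if it still has an edge of H that is not in E (Prop. 2, (ii)).
        if v not in J.bn or any(e not in E for e in H.edges_at(v)):
            bn.add(v)
            be[v] = E
    for v in J.bn:
        if v not in I.bn:
            bn.add(v)
            be[v] = I.be.get(v, frozenset()) | J.be.get(v, frozenset())
    return Boundary(bn, be, I.has_marker or J.has_marker)
```

In an implementation, every union formed by the Nonterminal and Binary rules would be computed this way, with the disjointness check (or the weaker overlap check of Section 3.4) run against the same boundary data. None of this changes the packed-forest representation itself, which is what the subsequent dynamic programs operate on.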
The Viterbi algorithm can then be applied to this representation to find the highest-probability derivation, or the Inside/Outside algorithm to set weights by Expectation-Maximization. 3.4 The disjointness check A successful proof using the inference rules above builds an HRG derivation (comprising all the rewrites used by the Nonterminal rule) which derives a graph H′, as well as a graph isomorphism φ : H′ →H (the union of the mappings from all the items). During inference, whenever we form the union of two subgraphs, we require that the subgraphs be disjoint. This is a rather expensive operation: it can be done using only their boundary representations, but the best algorithm we are aware of is still quadratic in the number of boundary nodes. Is it possible to drop the disjointness check? If we did so, it would become possible for the algorithm to recognize the same part of H twice. For example, Figure 4 shows an example of a grammar and an input that would be incorrectly recognized. However, we can replace the disjointness check with a weaker and faster check such that any derivation that merges two non-disjoint subgraphs will ultimately fail, and therefore the derived graph H′ is isomorphic to the input graph H′ as desired. This weaker check is to require, when merging two subgraphs I and J, that: 1. I and J have no boundary edges in common, and 2. If m belongs to both I and J, it must be a boundary node of both. Condition (1) is enough to guarantee that φ is locally one-to-one in the sense that for all v ∈H′, φ restricted to v and its neighbors is one-to-one. This is easy to show by induction: if φI : I′ →H and φJ : J′ →H are locally one-to-one, then φI ∪φJ must also be, provided condition (1) is met. Intuitively, the consequence of this is that we can detect any place where φ changes (say) from being one-to-one to two-to-one. So if φ is two-to-one, then it must be two-to-one everywhere (as in the example of Figure 4). But condition (2) guarantees that φ maps only one node to the marker m. We can show this again by induction: if φI and φJ each map only one node to m, then φI∪φJ must map only one node to m, by a combination of condition (2) and the fact that the inference rules guarantee that φI, φJ, and φI ∪φJ are one-to-one on boundary nodes. Then we can show that, since m is recognized exactly once, the whole graph is also recognized exactly once. Proposition 4. If H and H′ are connected graphs, φ : H′ →H is locally one-to-one, and φ−1 is defined for some node of H, then φ is a bijection. Proof. Suppose that φ is not a bijection. Then there must be two nodes v′ 1, v′ 2 ∈H′ such that φ(v′ 1) = φ(v′ 2) = v ∈H. We also know that there is a node, namely, m, such that m′ = φ−1(m) is defined.2 Choose a path π (ignoring edge direction) from v to m. Because φ is a local isomorphism, we can construct a path from v′ 1 to m′ that maps to π. Similarly, we can construct a path from v′ 2 to m′ that maps to π. Let u′ be the first node that these two paths have in common. But u′ must have two edges that map to the same edge, which is a contradiction. □ 2If H were not connected, we would choose the marker in the same connected component as v. 930 3.5 Complexity The key to the efficiency of the algorithm is that the treewidth of G leads to a bound on the number of boundary nodes we must keep track of at any time. Let k be the treewidth of G. The time complexity of the algorithm is the number of ways of instantiating the inference rules. 
Each inference rule mentions only boundary nodes of R⊵η or R⊵ηi, all of which belong to Vη (by Proposition 3), so there are at most |Vη| ≤k + 1 of them. In the Nonterminal and Binary inference rules, each boundary edge could belong to I or J or neither. Therefore, the number of possible instantiations of any inference rule is in O((3dn)k+1). The space complexity of the algorithm is the number of possible items. For each active item [A →R, η, I, φ], every boundary node of R⊵η must belong to Vη ∩Vparent(η) (by Proposition 3). Therefore the number of boundary nodes is at most k+1 (but typically less), and the number of possible items is in O((2dn)k+1). 4 Synchronous Parsing As mentioned in Section 2.2, because HRGs have context-free derivation trees, it is easy to define synchronous HRGs, which define mappings between languages of graphs. Definition 8. A synchronous hyperedge replacement grammar (SHRG) is a tuple G = ⟨N, T, T ′, P, S ⟩, where • N is a finite set of nonterminal symbols • T and T ′ are finite sets of terminal symbols • S ∈N is the start symbol • P is a finite set of productions of the form (A →⟨R, R′, ∼⟩), where R is a graph fragment over N ∪T and R′ is a graph fragment over N ∪T ′. The relation ∼is a bijection linking nonterminal mentions in R and R′, such that if e ∼e′, then they have the same label. We call R the source side and R′ the target side. Some NLP applications (for example, word alignment) require synchronous parsing: given a pair of graphs, finding the derivation or forest of derivations that simultaneously generate both the source and target. The algorithm to do this is a straightforward generalization of the HRG parsing algorithm. For each rule (A →⟨R, R′, ∼⟩), we construct a nice tree decomposition of R∪R′ such that: • All the external nodes of both R and R′ belong to Vη for some η. (Without loss of generality, assume that η is the root.) • If e ∼e′, then e and e′ are introduced by the same tree node. In the synchronous parsing algorithm, passive items have the form [A, I, X, I′, X′] and active items have the form [A →R : R′, η, I, φ, I′, φ′]. For brevity we omit a re-presentation of all the inference rules, as they are very similar to their nonsynchronous counterparts. The main difference is that in the Nonterminal rule, two linked edges are rewritten simultaneously: [A →R : R′, η1, I, φ, I′, φ′] [B, J, X, J′, X′] [A →R : R′, η, I ∪J, φ ∪{e j 7→Xj}, I′ ∪J′, φ′ ∪{e′ j 7→X′ j}] where η1 is the only child of η, e and e′ are both introduced by η and e ∼e′, and both are labeled with nonterminal B. The complexity of the parsing algorithm is again in O((3dn)k+1), where k is now the maximum treewidth of the dependency graph as defined in this section. In general, this treewidth will be greater than the treewidth of either the source or target side on its own, so that synchronous parsing is generally slower than standard parsing. 5 Conclusion Although Lautemann’s polynomial-time extension of CKY to HRGs has been known for some time, the desire to use graph grammars for large-scale NLP applications introduces some practical considerations not accounted for in Lautemann’s original presentation. We have provided a detailed description of our refinement of his algorithm and its implementation. It runs in O((3dn)k+1) time and requires O((2dn)k+1) space, where n is the number of nodes in the input graph, d is its maximum degree, and k is the maximum treewidth of the rule right-hand sides in the grammar. 
We have also described how to extend this algorithm to synchronous parsing. The parsing algorithms described in this paper are implemented in the Bolinas toolkit.3 3The Bolinas toolkit can be downloaded from ⟨http://www.isi.edu/licensed-sw/bolinas/⟩. 931 Acknowledgements We would like to thank the anonymous reviewers for their helpful comments. This research was supported in part by ARO grant W911NF-10-1-0533. References Stefan Arnborg, Derek G. Corneil, and Andrzej Proskurowski. 1987. Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic and Discrete Methods, 8(2). Hans L. Bodlaender. 1997. Treewidth: Algorithmic techniques and results. In Proc. 22nd International Symposium on Mathematical Foundations of Computer Science (MFCS ’97), pages 29–36, Berlin. Springer-Verlag. Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michał Pilipczuk, Johan M. M. van Rooij, and Jakub Onufry Wojtaszczyk. 2011. Solving connectivity problems parameterized by treewidth in single exponential time. Computing Research Repository, abs/1103.0534. Frank Drewes, Hans-J¨org Kreowski, and Annegret Habel. 1997. Hyperedge replacement graph grammars. In Grzegorz Rozenberg, editor, Handbook of Graph Grammars and Computing by Graph Transformation, pages 95–162. World Scientific. Daniel Gildea. 2011. Grammar factorization by tree decomposition. Computational Linguistics, 37(1):231–248. Vibhav Gogate and Rina Dechter. 2004. A complete anytime algorithm for treewidth. In Proceedings of the Conference on Uncertainty in Artificial Intelligence. Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-based machine translation with hyperedge replacement grammars. In Proc. COLING. Aravind K. Joshi and Yves Schabes. 1997. Treeadjoining grammars. In Grzegorz Rozenberg and Arto Salomaa, editors, Handbook of Formal Languages and Automata, volume 3, pages 69–124. Springer. Clemens Lautemann. 1990. The complexity of graph languages generated by hyperedge replacement. Acta Informatica, 27:399–421. Steffen Mazanek and Mark Minas. 2008. Parsing of hyperedge replacement grammars with graph parser combinators. In Proc. 7th International Workshop on Graph Transformation and Visual Modeling Techniques. Richard Moot. 2008. Lambek grammars, tree adjoining grammars and hyperedge replacement grammars. In Proc. TAG+9, pages 65–72. Grzegorz Rozenberg and Emo Welzl. 1986. Boundary NLC graph grammars—basic definitions, normal forms, and complexity. Information and Control, 69:136–167. Stuart M. Shieber, Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24:3–36. Lappoon Tang and Raymond Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Proc. European Conference on Machine Learning. 932
2013
91
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 933–943, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Grounded Unsupervised Semantic Parsing Hoifung Poon One Microsoft Way Microsoft Research Redmond, WA 98052, USA [email protected] Abstract We present the first unsupervised approach for semantic parsing that rivals the accuracy of supervised approaches in translating natural-language questions to database queries. Our GUSP system produces a semantic parse by annotating the dependency-tree nodes and edges with latent states, and learns a probabilistic grammar using EM. To compensate for the lack of example annotations or question-answer pairs, GUSP adopts a novel grounded-learning approach to leverage database for indirect supervision. On the challenging ATIS dataset, GUSP attained an accuracy of 84%, effectively tying with the best published results by supervised approaches. 1 Introduction Semantic parsing maps text to a formal meaning representation such as logical forms or structured queries. Recently, there has been a burgeoning interest in developing machine-learning approaches for semantic parsing (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Mooney, 2007; Kwiatkowski et al., 2011), but the predominant paradigm uses supervised learning, which requires example annotations that are costly to obtain. More recently, several groundedlearning approaches have been proposed to alleviate the annotation burden (Chen and Mooney, 2008; Kim and Mooney, 2010; B¨orschinger et al., 2011; Clarke et al., 2010; Liang et al., 2011). In particular, Clarke et al. (2010) and Liang et al. (2011) proposed methods to learn from questionanswer pairs alone, which represents a significant advance. However, although these methods exonerate annotators from mastering specialized logical forms, finding the answers for complex questions still requires non-trivial effort. 1 Poon & Domingos (2009, 2010) proposed the USP system for unsupervised semantic parsing, which learns a parser by recursively clustering and composing synonymous expressions. While their approach completely obviates the need for direct supervision, their target logic forms are selfinduced clusters, which do not align with existing database or ontology. As a result, USP can not be used directly to answer complex questions against an existing database. More importantly, it misses the opportunity to leverage database for indirect supervision. In this paper, we present the GUSP system, which combines unsupervised semantic parsing with grounded learning from a database. GUSP starts with the dependency tree of a sentence and produces a semantic parse by annotating the nodes and edges with latent semantic states derived from the database. Given a set of natural-language questions and a database, GUSP learns a probabilistic semantic grammar using EM. To compensate for the lack of direct supervision, GUSP constrains the search space using the database schema, and bootstraps learning using lexical scores computed from the names and values of database elements. Unlike previous grounded-learning approaches, GUSP does not require ambiguous annotations or oracle answers, but rather focuses on leveraging database contents that are readily available. 
Unlike USP, GUSP predetermines the target logical forms based on the database schema, which alleviates the difficulty in learning and ensures that the output semantic parses can be directly used in querying the database. To handle syntax-semantics mismatch, GUSP introduces a novel dependency-based meaning representation 1Clarke et al. (2010) and Liang et al. (2011) used the annotated logical forms to compute answers for their experiments. 933 by augmenting the state space to represent semantic relations beyond immediate dependency neighborhood. This representation also factorizes over nodes and edges, enabling linear-time exact inference in GUSP. We evaluated GUSP on end-to-end question answering using the ATIS dataset for semantic parsing (Zettlemoyer and Collins, 2007). Compared to other standard datasets such as GEO and JOBS, ATIS features a database that is an order of magnitude larger in the numbers of relations and instances, as well as a more irregular language (ATIS questions were derived from spoken dialogs). Despite these challenges, GUSP attains an accuracy of 84% in end-to-end question answering, effectively tying with the stateof-the-art supervised approaches (85% by Zettlemoyer & Collins (2007), 83% by Kwiatkowski et al. (2011)). 2 Background 2.1 Semantic Parsing The goal of semantic parsing is to map text to a complete and detailed meaning representation (Mooney, 2007). This is in contrast with semantic role labeling (Carreras and Marquez, 2004) and information extraction (Banko et al., 2007; Poon and Domingos, 2007), which have a more restricted goal of identifying local semantic roles or extracting selected information slots. The standard language for meaning representation is first-order logic or a sublanguage, such as FunQL (Kate et al., 2005; Clarke et al., 2010) and lambda calculus (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007). Poon & Domingos (2009, 2010) induce a meaning representation by clustering synonymous lambda-calculus forms stemming from partitions of dependency trees. More recently, Liang et al. (2011) proposed DCS for dependency-based compositional semantics, which represents a semantic parse as a tree with nodes representing database elements and operations, and edges representing relational joins. In this paper, we focus on semantic parsing for natural-language interface to database (Grosz et al., 1987). In this problem setting, a naturallanguage question is first translated into a meaning representation by semantic parsing, and then converted into a structured query such as SQL to obtain answer from the database. 2.2 Unsupervised Semantic Parsing Unsupervised semantic parsing was first proposed by Poon & Domingos (2009, 2010) with their USP system. USP defines a probabilistic model over the dependency tree and semantic parse using Markov logic (Domingos and Lowd, 2009), and recursively clusters and composes synonymous dependency treelets using a hard EM-like procedure. Since USP uses nonlocal features (e.g., the argument-number feature) and operates over partitions, exact inference is intractable, and USP resorts to a greedy approach to find the MAP parse by searching over partitions. Titov & Klementiev (2011) proposed a Bayesian version of USP and Titov & Klementiev (2012) adapted it for semantic role induction. In USP, the meaning is represented by self-induced clusters. Therefore, to answer complex questions against a database, it requires an additional ontology matching step to resolve USP clusters with database elements. 
Popescu et al. (2003, 2004) proposed the PRECISE system, which does not require labeled examples and can be directly applied to question answering with a database. The PRECISE system, however, requires substantial amount of engineering, including a domain-specific lexicon that specifies the synonyms for names and values of database elements, a restricted set of potential interpretations for domain verbs and prepositions, as well as a set of domain questions with manually labeled POS tags for retraining the tagger and parser. It also focuses on the subset of easy questions (“semantically tractable” questions), and sidesteps the problem of dealing with complex and nested structures, as well as ambiguous interpretations. Remarkably, while PRECISE can be very accurate on easy questions, it does not try to learn from these interpretations. In contrast, Goldwasser et al. (2011) proposed a self-supervised approach, which iteratively chose high-confidence parses to retrain the parser. Their system, however, still required a lexicon manually constructed for the given domain. Moreover, it was only applied to a small domain (a subset of GEO), and the result still trailed supervised systems by a wide margin. 2.3 Grounded Learning for Semantic Parsing Grounded learning is motivated by alleviating the burden of direct supervision via interaction with the world, where the indirect supervision may take the form as ambiguous annotations (Chen 934 get toronto flight from to diego in san stopping dtw E:flight:R E:flight V:city.name V:city.name:C E:flight_stop V:airport.code V:city.name + E:flight Figure 1: End-to-end question answering by GUSP for sentence get flight from toronto to san diego stopping in dtw. Top: the dependency tree of the sentence is annotated with latent semantic states by GUSP. For brevity, we omit the edge states. Raising occurs from flight to get and sinking occurs from get to diego. Bottom: the semantic tree is deterministically converted into SQL to obtain answer from the database. and Mooney, 2008; Kim and Mooney, 2010; B¨orschinger et al., 2011) or example questionanswer pairs (Clarke et al., 2010; Liang et al., 2011). In general, however, such supervision is not always available or easy to obtain. In contrast, databases are often abundantly available, especially for important domains. The database community has considerable amount of work on leveraging databases in various tasks such as entity resolution, schema matching, and others. To the best of our knowledge, this approach is still underexplored in the NLP community. One notable exception is distant supervision (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Krishnamurthy and Mitchell, 2012; Heck et al., 2013), which used database instances to derive training examples for relation extraction. This approach, however, still has considerable limitations. For example, it only handles binary relations, and the quality of the training examples is inherently noisy and hard to control. Moreover, this approach is not applicable to the questionanswering setting considered in this paper, since entity pairs in questions need not correspond to valid relational instances in the database. 3 Grounded Unsupervised Semantic Parsing In this section, we present the GUSP system for grounded unsupervised semantic parsing. GUSP is unsupervised and does not require example logical forms or question-answer pairs. Figure 1 shows an example of end-to-end question answering using GUSP. 
GUSP produces a semantic parse of the question by annotating its dependency tree with latent semantic states. The semantic tree can then be deterministically converted into SQL to obtain answer from the database. Given a set of natural-language questions and a database, GUSP learns a probabilistic semantic grammar using EM. To compensate for the lack of annotated examples, GUSP derives indirect supervision from a novel combination of three key sources. First, GUSP leverages the target database to constrain the search space. Specifically, it defines the semantic states based on the database schema, and derives lexical-trigger scores from database elements to bootstrap learning. Second, in contrast to most existing approaches for semantic parsing, GUSP starts directly from dependency trees and focuses on translating them into semantic parses. While syntax may not always align perfectly with semantics, it is still highly informative about the latter. In particular, dependency edges are often indicative of semantic relations. On the other hand, syntax and semantic often diverge, and synactic parsing errors abound. To combat this problem, GUSP introduces a novel dependency-based meaning representation with an augmented state space to account for semantic relations that are nonlocal in the dependency tree. GUSP’s approach of starting directly from dependency tree is inspired by USP. However, GUSP uses a different meaning representation defined over individual nodes and edges, rather than partitions, which enables linear-time exact inference. GUSP also handles complex linguistic phenomena and syntax-semantics mismatch by explicitly augmenting the state space, whereas USP’s capability in handling such phenomena is indirect and more limited. GUSP represents meaning by a semantic tree, which is similar to DCS (Liang et al., 2011). Their approach to semantic parsing, however, differs from GUSP in that it induced the semantic tree directly from a sentence, rather than starting from 935 a dependency tree and annotating it. Their approach alleviates some complexity in the meaning representation for handling syntax-semantics mismatch, but it has to search over a much larger search space involving exponentially many candidate trees. This might partially explain why it has not yet been scaled up to the ATIS dataset. Finally, GUSP recognizes that certain aspects in semantic parsing may not be worth learning using precious annotated examples. These are domain-independent and closed-class expressions, such as times and dates (e.g., before 5pm and July seventeenth), logical connectives (e.g., and, or, not), and numerics (e.g., 200 dollars). GUSP preprocesses the text to detect such expressions and restricts their interpretation to database elements of compatible types (e.g., before 5pm vs. flight.departure time or flight.arrival time). Short of training examples, GUSP also resolves quantifier scoping ambiguities deterministically by a fixed ordering. For example, in the phrase cheapest flight to Seattle, the scope of cheapest can be either flight or flight to seattle. GUSP always chooses to apply the superlative at last, amounting to choosing the most restricted scope (flight to seattle), which is usually the correct interpretation. In the remainder of this section, we first formalize the problem setting and introduce the GUSP meaning representation. We then present the GUSP model and learning and inference algorithms. Finally, we describe how to convert a GUSP semantic parse into SQL. 
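Before the formal setup, a small illustration of the closed-class preprocessing mentioned above may be useful. The sketch below is ours: the regular expressions and the column lists are illustrative stand-ins for whatever detector and type table an actual implementation would use, although the column names follow the ATIS-style schema discussed in this paper.

```python
import re

TIME_COLUMNS = ["flight.departure_time", "flight.arrival_time"]
MONEY_COLUMNS = ["fare.one_direction_cost"]

def closed_class_candidates(span: str):
    """Return the database columns of compatible type for a detected
    closed-class expression; patterns and column lists are illustrative."""
    if re.search(r"\b\d{1,2}(:\d{2})?\s*(am|pm)\b", span):
        return TIME_COLUMNS           # e.g. "before 5pm"
    if re.search(r"\b\d+\s*dollars\b", span):
        return MONEY_COLUMNS          # e.g. "200 dollars"
    return []                         # not a recognized closed-class expression
```

The point of this step is only to prune the interpretation space before learning; the choice among the remaining compatible columns is still left to the model.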
3.1 Problem Formulation Let d be a dependency tree, N(d) and E(d) be its nodes and edges. In GUSP, a semantic parse of d is an assignment z : N(d) ∪E(d) →S that maps its nodes and edges to semantic states in S. For example, in the example in Figure 1, z(flight) = E : flight. At the core of GUSP is a joint probability distribution Pθ(d, z) over the dependency tree and the semantic parse. Semantic parsing in GUSP amounts to finding the most probable parse z∗= arg maxz Pθ(d, z). Given a set of sentences and their dependency trees D, learning in GUSP maximizes the log-likelihood of D while summing out the latent parses z: θ∗= arg max log Pθ(D) = arg max X d∈D log X z Pθ(d, z) 3.2 Simple Semantic States Node states GUSP creates a state E:X (E short for entity) for each database entity X (i.e., a database table), a state P:Y (P short for property) and V:Y (V short for value) for each database attribute Y (i.e., a database column). Node states are assigned to dependency nodes. Intuitively, they represent database entities, properties, and values. For example, the ATIS domain contains entities such as flight and fare, which may contain properties such as the departure time flight.departure time or ticket price fare.one direction cost. The mentions of entities and properties are represented by entity and property states, whereas constants such as 9:25am or 120 dollars are represented by value states. In the semantic parse in Figure 1, for example, flight is assigned to entity state E:flight, where toronto is assigned to value state V:city.name. There is a special node state NULL, which signifies that the subtree headed by the word contributes no meaning to the semantic parse (e.g., an auxilliary verb). Edge states GUSP creates an edge state for each valid relational join paths connecting two node states. Edge states are assigned to dependency edges. GUSP enforces the constraints that the node states of the dependency parent and child must agree with the node states in the edge state. For example, E:flight-V:flight.departure time represents a natural join between the flight entity and the property value departure time. For a dependency edge e : a →b, the assignment to E:flight-V:flight.departure time signifies that a represents a flight entity, and b represents the value of its departure time. An edge state may also represent a relational path consisting of a serial of joins. For example, Zettlemoyer and Collins (2007) used a predicate from(f,c) to signify that flight f starts from city c. In the ATIS database, however, this amounts to a path of three joins: flight.from airport-airport airport-airport service airport service-city In GUSP, this is represented by the edge state flight-flight.from airport-airport-airport service-city. 936 GUSP only creates edge states for relational join paths up to length four, as longer paths rarely correspond to meaningful semantic relations. Composition To handle compositions such as American Airlines and New York City, it helps to distinguish the head words (Airlines and City) from the rest. In GUSP, this is handled by introducing, for each node state such as E:airline, a new node state such as E:airline:C, where C signifies composition. For example, in Figure 1, diego is assigned to V:city.name, whereas san is assigned to V:city.name:C, since san diego forms a single meaning unit, and should be translated into SQL as a whole. 
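To make the size and shape of this simple state space concrete, the following much-simplified sketch shows how it could be enumerated from a schema. The `schema` and `join_graph` container formats are assumptions of the sketch, it enumerates table-level join paths only, and the real edge states additionally record the entity, property, or value states at a path's endpoints.

```python
def node_states(schema):
    """Node states of Sec. 3.2.  `schema` maps a table name to its columns;
    this container shape is an assumption of the sketch."""
    states = {"NULL"}
    for table, columns in schema.items():
        states.add(f"E:{table}")
        for col in columns:
            states.update({f"P:{table}.{col}",
                           f"V:{table}.{col}",
                           f"V:{table}.{col}:C"})   # composition state
    return states

def edge_states(join_graph, max_joins=4):
    """Edge states as relational join paths of up to four joins.
    `join_graph` maps a table to the tables it can join with directly."""
    paths = set()
    def extend(path):
        if 1 < len(path) <= max_joins + 1:
            paths.add("-".join(path))
        if len(path) <= max_joins:
            for nxt in join_graph.get(path[-1], []):
                if nxt not in path:       # skip trivially cyclic paths
                    extend(path + [nxt])
    for table in join_graph:
        extend([table])
    return paths
```

The length-four cutoff is what keeps this enumeration, and hence the edge-state inventory, manageable on a schema of ATIS's size.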
3.3 Domain-Independent States These are for handling special linguistic phenomena that are not domain-specific, such as negation, superlatives, and quantifiers. Operator states GUSP create node states for the logical and comparison operators (OR, AND, NOT, MORE, LESS, EQ). Additionally, to handle the cases when prepositions and logical connectives are collapsed into the label of a dependency edge, as in Stanford dependency, GUSP introduces an edge state for each triple of an operator and two node states, such as E:flight-AND-E:fare. Quantifier states GUSP creates a node state for each of the standard SQL functions: argmin, argmax, count, sum. Additionally, it creates a node state for each pair of compatible function and property. For example, argmin can be applied to any numeric property, in particular flight.departure time, and so the node state P:flight.departure time:argmin is created and can be assigned to superlatives such as earliest. 3.4 Complex Semantic States For sentences with a correct dependency tree and well-aligned syntax and semantics, the simple semantic states suffice for annotating the correct semantic parse. However, in complex sentences, syntax and semantic often diverge, either due to their differing goals or simply stemming from syntactic parsing errors. In Figure 1, the dependency tree contains multiple errors: from toronto and to san diego are mistakenly attached to get, which has no literal meaning here; stopping in dtw is also wrongly attached to diego rather than flight. Annotating such a tree with only simple states will lead to incorrect semantic parses, e.g., by joining V:city:san diego with V:airport:dtw via E:airport service, rather than joining E:flight with V:airport:dtw via E:flight stop. To overcome these challenges, GUSP introduces three types of complex states to handle syntax-semantics divergence. Figure 1 shows the correct semantic parse for the above sentence using the complex states. Raising For each simple node state N, GUSP creates a “raised” state N:R (R short for raised). A raised state signifies a word that has little or none of its own meaning, but effectively takes one of its child states to be its own (“raises”). Correspondingly, GUSP creates a “raising” edge state N-R-N, which signifies that the parent is a raised state and its meaning is derived from the dependency child of state N. For all other children, the parent behaves just as state N. For example, in Figure 1, get is assigned to the raised state E:flight:R, and the edge between get and flight is assigned to the raising edge state E:flight-R-E:flight. Sinking For simple node states A, B and an edge state E connecting the two, GUSP creates a “sinking” node state A+E+B:S (S for sinking). When a node n is assigned to such a sinking state, n can behave as either A or B for its children (i.e., the edge states can connect to either one), and n’s parent must be of state B. In Figure 1, for example, diego is assigned to a sinking state V:city.name + E:flight (the edge state is omitted for brevity). E:flight comes from its parent get. For child san, diego behaves as in state V:city.name, and their edge state is a simple compositional join. For the other child stopping, diego behaves as in state E:flight, and their edge state is a relational join connecting flight with flight stop. Effectively, this connects stopping with get and eventually with flight (due to raising), virtually correcting the syntax-semantics mismatch stemming from attachment errors. 
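Together with the implicit states described next, these augmented states can be generated mechanically from the simple inventory. A minimal sketch, assuming simple edge states are available as (parent state, join path, child state) triples (a representation we adopt here for illustration only):

```python
def complex_states(simple_node_states, simple_edge_states):
    """Augment the simple state space with raising, sinking, and implicit
    states (Sec. 3.4), following the naming conventions in the text."""
    raised_nodes   = {f"{n}:R" for n in simple_node_states}         # N:R
    raising_edges  = {f"{n}-R-{n}" for n in simple_node_states}     # N-R-N
    sinking_nodes  = {f"{a}+{e}+{b}:S" for a, e, b in simple_edge_states}
    implicit_nodes = {f"{a}+{e}+{b}:I" for a, e, b in simple_edge_states}
    node_states = (simple_node_states | raised_nodes
                   | sinking_nodes | implicit_nodes)
    edge_states = ({"-".join((a, e, b)) for a, e, b in simple_edge_states}
                   | raising_edges)
    return node_states, edge_states
```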
Implicit For simple node states A, B and an edge state E connecting the two, GUSP also creates a node state A+E+B:I (I for implicit) with the “implicit” state B. In natural languages, an entity is often introduced implicitly, which the reader infers from shared world knowledge. For example, 937 to obtain the correct semantic parse for Give me the fare from Seattle to Boston, one needs to infer the existence of a flight entity, as in Give me the fare (of a flight) from Seattle to Boston. Implicit states offer candidates for addressing such needs. As in sinking, child nodes have access to either of the two simple states, but the implicit state is not visible to the parent node. 3.5 Lexical-Trigger Scores GUSP uses the database elements to automatically derive a simple scoring scheme for lexical triggers. If a database element has a name of k words, each word is assigned score 1/k for the corresponding node state. Similarly for property values and value node states. In a sentence, if a word w triggers a node state with score s, its dependency children and left and right neighbors all get a trigger score of 0.1·s for the same state. To score relevant words not appearing in the database (due to incompleteness of the database or lexical variations), GUSP uses DASH (Pantel et al., 2009) to provide additional word-pair scoring based on lexical distributional similarity computed over general text corpora (Wikipedia in this case). In the case of multiple score assignments for the same word, the maximum score is used. For multi-word values of property Y , and for a dependency edge connecting two collocated words, GUSP assigns a score 1.0 to the edge state joining the value node state V:Y to its composition state V:Y:C, as well as the edge state joining two composition states V:Y:C. GUSP also uses a domain-independent list of superlatives with the corresponding data types and polarity (e.g., first, last, earliest, latest, cheapest) and assigns a trigger score of 1.0 for each property of a compatible data type (e.g., cheapest for properties of type MONEY). 3.6 The GUSP Model In a nutshell, the GUSP model resembles a treeHMM, which models the emission of words and dependencies by node and edge states, as well as transition between an edge state and the parent and child node states. In preliminary experiments on the development set, we found that the na¨ıve model (with multinomials as conditional probabilities) did not perform well in EM. We thus chose to apply feature-rich EM (Berg-Kirkpatrick et al., 2010) in GUSP, which enabled the use of more generalizable features. Specifically, GUSP defines a probability distribution over dependency tree d and semantic parse z by Pθ(d, z) = 1 Z exp X i fi(d, z) · wi(d, z) where fi and wi are features and their weights, and Z is the normalization constant that sums over all possible d, z (over the same unlabeled tree). The features of GUSP are as follows: Lexical-trigger scores These are implemented as emission features with fixed weights. For example, given a token t that triggers node state N with score s, there is a corresponding features 1(lemma = t, state = N) with weight α·s, where α is a parameter. Emission features for node states GUSP uses two templates for emission of node states: for raised states, 1(token = ·), i.e., the emission weights for all raised states are tied; for non-raised states, 1(lemma = ·, state = N). 
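In clean notation, the distribution above is the familiar log-linear form P_theta(d, z) = (1/Z) exp(sum_i w_i f_i(d, z)). As a sketch of how the node-state templates just listed might be instantiated (the function and feature-name formats are ours; the default alpha = 50 follows the setting reported in Section 4.3):

```python
def node_emission_features(token, lemma, state, trigger_score=None, alpha=50.0):
    """Node-state emission templates of Sec. 3.6.  Returns (name, weight)
    pairs; a weight of None means the weight is learned by EM, otherwise
    it is clamped (lexical-trigger features)."""
    feats = []
    if trigger_score is not None:
        # a lexical-trigger score s becomes an indicator with fixed weight alpha*s
        feats.append((f"trigger|lemma={lemma}|state={state}",
                      alpha * trigger_score))
    if state.endswith(":R"):
        # all raised states share (tie) their token-emission weights
        feats.append((f"emit_raised|token={token}", None))
    else:
        feats.append((f"emit|lemma={lemma}|state={state}", None))
    return feats
```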
Emission features for edge states GUSP uses the following templates for emission of edge states: Child node state is NULL, dependency= ·; Edge state is RAISING, dependency= ·; Parent node state is same as the child node state, dependency= ·; Otherwise, parent node state= ·, child node state= ·, edge state type= ·, dependency= ·. Transition features GUSP uses the following templates for transition features, which are similar to the edge emission features except for the dependency label: Child node state is NULL; Edge state is RAISING; Parent node state is same as the child node state; Otherwise, parent node state= ·, child node state= ·, edge state type= ·. Complexity Prior To favor simple semantic parses, GUSP imposes an exponential prior with weight β on nodes states that are not null or raised, and on each relational join in an edge state. 3.7 Learning and Inference Since the GUSP model factors over nodes and edges, learning and inference can be done efficiently using EM and dynamic programming. Specifically, the MAP parse and expectations can 938 be computed by tree-Viterbi and inside-outside (Petrov and Klein, 2008). The parameters can be estimated by feature-rich EM (Berg-Kirkpatrick et al., 2010). Because the Viterbi and inside-outside are applied to a fixed tree (i.e., the input dependency tree), their running times are only linear in the sentence length in GUSP. 3.8 Query Generation Given a semantic parse, GUSP generates the SQL by a depth-first traversal that recursively computes the denotation of a node from the denotations of its children and its node state and edge states. Each denotation is a structured query that contains: a list of entities for projection (corresponding to the FROM statement in SQL); a computation tree where the leaves are simple joins or value comparisons, and the internal nodes are logical or quantifier operators (the WHERE statement); the salient database elements (the SELECT statement). Below, we illustrate this procedure using the semantic parse in Figure 1 as a running example. Value node state GUSP creates a semantic object of the given type with a unique index and the word constant. For example, the denotation for node toronto is a city.name object with a unique index and constant “toronto”. The unique index is necessary in case the SQL involves multiple instances of the same entity. For example, the SQL in Figure 1 involves two instances of the entity city, corresponding to the departure and arrival cities, respectively. By default, such a semantic object will be translated into an equality constraint, such as city.name = toronto. Entity or property node state GUSP creates a semantic object of the given type with a unique relation index. For example, the denotation for node flight is simply a flight object with a unique index. By default, such an object will contribute to the list of entities in SQL projection (the FROM statement), but not any constraints. NULL state GUSP returns an empty denotation. Simple edge state GUSP appends the child denotation to that of the parent, and appends equality constraints corresponding to the relational join path. In the case of composition, such as the join between diego and san, GUSP simply keeps the parent object, while adding to it the words from the child. In the case of a more complex join, such as that between stopping and dtw, GUSP adds the relational constraints that join flight stop with airport: flight stop.stop airport = airport.airport id. 
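The cases covered so far already fix the overall shape of the traversal; the raising, implicit/sinking, and operator cases described below only change how a child's denotation is merged into its parent's. A minimal sketch of that traversal, assuming the accessor and helper names shown (none of them come from a released system) and covering only the cases above:

```python
from dataclasses import dataclass, field
from itertools import count
from typing import List

@dataclass
class Denotation:
    """A query under construction: entities for FROM, equality constraints
    for WHERE, and salient elements for SELECT (Sec. 3.8)."""
    tables: List[str] = field(default_factory=list)       # FROM
    constraints: List[str] = field(default_factory=list)  # WHERE
    salient: List[str] = field(default_factory=list)      # SELECT

_ids = count(1)  # unique relation indices, e.g. city_1 vs. city_2

def denote(node) -> Denotation:
    """Depth-first computation of a node's denotation.  node.state,
    node.word, node.children and the helpers table_of, column_of, and
    join_constraints are assumed interfaces."""
    d = Denotation()
    if node.state.startswith("V:"):            # value state, e.g. V:city.name
        alias = f"{table_of(node.state)}_{next(_ids)}"
        d.tables.append(alias)
        d.constraints.append(f"{alias}.{column_of(node.state)} = '{node.word}'")
    elif node.state.startswith(("E:", "P:")):  # entity or property state
        alias = f"{table_of(node.state)}_{next(_ids)}"
        d.tables.append(alias)
        d.salient.append(alias)
    # NULL state contributes nothing
    for child in node.children:
        c = denote(child)
        d.tables += c.tables
        d.salient += c.salient
        # simple edge states append the equality constraints of their join
        # path, e.g. flight_stop.stop_airport = airport.airport_id
        d.constraints += c.constraints + join_constraints(node, child)
    return d
```

The final SQL would then be assembled as a SELECT over the salient elements, FROM the collected table aliases, WHERE the constraint list holds, with quantifiers and negation applied in the fixed order described under "Resolve scoping ambiguities" below.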
Raising edge state GUSP simply takes the child denotation and sets that to the parent. Implicit and sinking states GUSP maintains two separate denotations for the two simple states in the complex state, and processes their respective edge states accordingly. For example, the node diego contains two denotations, one for V:city.name, and one for E:flight, with the corresponding child being san and stopping, respectively. Domain-independent states For comparator states such as MORE or LESS, GUSP changes the default equality constraints to an inequality one, such as flight.depart time < 600 for before 6am. For logical connectives, GUSP combines the projection and constraints accordingly. For quantifier states, GUSP applies the given function to the query. Resolve scoping ambiguities GUSP delays applying quantifiers until the child semantic object differs from the parent one or when reaching the root. GUSP employs the following fixed ordering in evaluating quantifiers and operators: superlatives and other quantifiers are evaluated at last (i.e., after evaluating all other joins or operators for the given object), whereas negation is evaluated first, conjunctions and disjunctions are evaluated in their order of appearance. 4 Experiments 4.1 Task We evaluated GUSP on the ATIS travel planning domain, which has been studied in He & Young (2005, 2006) and adapted for evaluating semantic parsing by Zettlemoyer & Collins (2007) (henceforth ZC07). The ZC07 dataset contains annotated logical forms for each sentence, which we do not use. Since our goal is not to produce a specific logical form, we directly evaluate on the end-to-end task of translating questions into database queries and measure question-answering accuracy. The ATIS distrbution contains the original SQL annotations, which we used to compute gold answers 939 for evaluation only. The dataset is split into training, development, and test, containing 4500, 478, and 449 sentences, respectively. We used the development set for initial development and tuning hyperparameters. At test time, we ran GUSP over the test set to learn a semantic parser and output the MAP parses.2 4.2 Preprocessing The ATIS sentences were originally derived from spoken dialog and were therefore in lower cases. Since case information is important for parsers and taggers, we first truecased the sentences using DASH (Pantel et al., 2009), which stores the case for each phrase in Wikipedia. We then ran the sentences through SPLAT, a state-of-the-art NLP toolkit (Quirk et al., 2012), to conduct tokenization, part-of-speech tagging, and constituency parsing. Since SPLAT does not output dependency trees, we ran the Stanford parser over SPLAT parses to generate the dependency trees in Stanford dependency (de Marneffe et al., 2006). 4.3 Systems For the GUSP system, we set the hyperparameters from initial experiments on the development set, and used them in all subsequent experiments. Specifically, we set α = 50 and β = −0.1, and ran three iterations of feature-rich EM with an L2 prior of 10 over the feature weights. To evaluate the importance of complex states, we considered two versions of GUSP : GUSPSIMPLE and GUSP-FULL, where GUSPSIMPLE only admits simple states, whereas GUSP-FULL admits all states. During development, we found that some questions are inherently ambiguous that cannot be solved except with some domain knowledge or labeled examples. 
In Section 3.2, we discuss an edge state that joins a flight with its starting city: flight-flight.from airport-airport-airport service-city. The ATIS database also contains another path of the same length: flight-flight.from airport-airport-ground service-city. The only difference is that air service is replaced by ground service. In some occasions, the 2This doesn’t lead to overfitting since we did not use any labeled information in the test set. Table 1: Comparison of semantic parsing accuracy on the ATIS test dataset. Both ZC07 and FUBL used annotated logical forms in training, whereas GUSP-FULL and GUSP++ did not. The numbers for GUSP-FULL and GUSP++ are endto-end question answering accuracy, whereas the numbers for ZC07 and FUBL are recall on exact match in logical forms. Accuracy ZC07 84.6 FUBL 82.8 GUSP-FULL 74.8 GUSP++ 83.5 answers are identical whereas in others they are different. Without other information, neither the complexity prior nor EM can properly discriminate one against another. (Note that this ambiguity is not present in the ZC07 logical forms, which use a single predicate from(f,c) for the entire relation paths. In other words, to translate ZC07 logical forms into SQL, one also needs to decide on which path to use.) Another type of domain-specific ambiguities involves sentences such as give me information on flights after 4pm on wednesday. There is no obvious information to disambiguate between flight.departure time and flight.arrival time for 4pm. Such ambiguities suggest opportunities for interactive learning,3 but this is clearly out of the scope of this paper. Instead, we incorporated a simple disambiguation feature with a small weight of 0.01 that fires over the simple states of flight.departure time and airport service. We named the resulting system GUSP++. To gauge the difficulty of the task and the quality of lexical-trigger scores, we also considered a deterministic baseline LEXICAL, which computed semantic parses using lexical-trigger scores alone. 3For example, after eliminating other much less likely alternatives, the system can present to the user with both choices and let the user to choose the correct one. The implicit feedback signal can then be used to train the system for future disambiguation. 940 Table 2: Comparison of question answering accuracy in ablation experiments. Accuracy LEXICAL 33.9 GUSP-SIMPLE 66.5 GUSP-FULL 74.8 GUSP++ 83.5 −RAISING 75.7 −SINKING 77.5 −IMPLICIT 76.2 4.4 Results We first compared the results of GUSP-FULL and GUSP++ with ZC07 and FUBL (Kwiatkowski et al., 2011).4 Note that ZC07 and FUBL were evaluated on exact match in logical forms. We used their recall numbers which are the percentages of sentences with fully correct logical forms. Given that the questions are quite specific and generally admit nonzero number of answers, the questionanswer accuracy should be quite comparable with these numbers. Table 1 shows the comparison. Surprisingly, even without the additional disambiguation feature, GUSP-FULL already attained an accuracy broadly in range with supervised results. With the feature, GUSP++ effectively tied with the best supervised approach. To evaluate the importance of various components in GUSP, we conducted ablation test to compare the variants of GUSP. Table 2 shows the results. LEXICAL can parse more than one third of the sentences correctly, which is quite remarkable in itself, considering that it only used the lexical scores. 
On the other hand, roughly two-third of the sentences cannot be correctly parsed in this way, suggesting that the lexical scores are noisy and ambiguous. In comparison, all GUSP variants achieved significant gains over LEXICAL. Additionally, GUSP-FULL substantially outperformed GUSP-SIMPLE, highlighting the challenges of syntax-semantics mismatch in ATIS, and demonstrating the importance and effectiveness of complex states for handling such mismatch. All three types of complex states produced significant contributions. For example, compared to GUSP++, 4We should note that while the more recent system of FUBL slightly trails ZC07, it is language-independent and can parse questions in multiple languages. removing RAISING dropped accuracy by almost 8 points. 4.5 Discussion Upon manual inspection, many of the remaining errors are due to syntactic parsing errors that are too severe to fix. This is partly due to the fact that ATIS sentences are out of domain compared to the newswired text on which the syntactic parsers were trained. For example, show, list were regularly parsed as nouns, whereas round (as in round trip) were often parsed as a verb and northwest were parsed as an auxilliary verb. Another reason is that ATIS sentences are typically less formal or grammatical, which exacerbates the difficulty in parsing. In this paper, we used the 1-best dependency tree to produce semantic parse. An interesting future direction is to consider joint syntacticsemantic parsing, using k-best trees or even the parse forest as input and reranking the top parse using semantic information.5 5 Conclusion This paper introduces grounded unsupervised semantic parsing, which leverages available database for indirect supervision and uses a grounded meaning representation to account for syntax-semantics mismatch in dependency-based semantic parsing. The resulting GUSP system is the first unsupervised approach to attain an accuracy comparable to the best supervised systems in translating complex natural-language questions to database queries. Directions for future work include: joint syntactic-semantic parsing, developing better features for learning; interactive learning in a dialog setting; generalizing distant supervision; application to knowledge extraction from database-rich domains such as biomedical sciences. Acknowledgments We would like to thank Kristina Toutanova, Chris Quirk, Luke Zettlemoyer, and Yoav Artzi for useful discussions, and Patrick Pantel and Michael Gammon for help with the datasets. 5Note that this is still different from the currently predominant approaches in semantic parsing, which learn to parse both syntax and semantics by training from the semantic parsing datasets alone, which are considerably smaller compared to resources available for syntactic parsing. 941 References Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, pages 2670–2676, Hyderabad, India. AAAI Press. Taylor Berg-Kirkpatrick, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Benjamin B¨orschinger, Bevan K. Jones, and Mark Johnson. 2011. Reducing grounded learning tasks to grammatical inference. 
In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Xavier Carreras and Luis Marquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role labeling. In Proceedings of the Eighth Conference on Computational Natural Language Learning, pages 89–97, Boston, MA. ACL. David L. Chen and Raymond J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In ICML-08. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from world’s response. In Proceedings of the 2010 Conference on Natural Language Learning. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the Fifth International Conference on Language Resources and Evaluation, pages 449– 454, Genoa, Italy. ELRA. Pedro Domingos and Daniel Lowd. 2009. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan & Claypool, San Rafael, CA. Dan Goldwasser, Roi Reichart, James Clarke, and Dan Roth. 2011. Confidence driven unsupervised semantic parsing. In Proceedings of the Forty Ninth Annual Meeting of the Association for Computational Linguistics. B.J. Grosz, D. Appelt, P. Martin, and F. Pereira. 1987. Team: An experiment in the design of transportable natural language interfaces. Artificial Intelligence, 32:173–243. Yulan He and Steve Young. 2005. Semantic processing using the hidden vector state model. In Computer Speech and Language. Yulan He and Steve Young. 2006. Spoken language understanding using the hidden vector state model. In Speech Communication Special Issue on Spoken Language understanding for Conversational Systems. Larry Heck, Dilek Hakkani-Tur, and Gokhan Tur. 2013. Leveraging knowledge graphs for web-scale unsupervised semantic parsing. In Proceedings of the Interspeech 2013. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the Forty Ninth Annual Meeting of the Association for Computational Linguistics. R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the Twentieth National Conference on Artificial Intelligence. Joohyun Kim and Raymond J. Mooney. 2010. Generative alignment and semantic parsing for learning from ambiguous supervision. In COLING10. Jayant Krishnamurthy and Tom M. Mitchell. 2012. Weakly supervised training of semantic parsers. In EMNLP-12. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in ccg grammar induction for semantic parsing. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the Forty Ninth Annual Meeting of the Association for Computational Linguistics. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Forty Seventh Annual Meeting of the Association for Computational Linguistics. Raymond J. Mooney. 2007. Learning for semantic parsing. In Proceedings of the Eighth International Conference on Computational Linguistics and Intelligent Text Processing, pages 311–324, Mexico City, Mexico. Springer. 
Patrick Pantel, Eric Crestan, Arkady Borkovsky, AnaMaria Popescu, and Vishnu Vyas. 2009. Web-scale distributional similarity and entity set expansion. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Slav Petrov and Dan Klein. 2008. Discriminative loglinear grammars with latent variables. In NIPS-08. Hoifung Poon and Pedro Domingos. 2007. Joint inference in information extraction. In Proceedings of the Twenty Second National Conference on Artificial Intelligence, pages 913–918, Vancouver, Canada. AAAI Press. 942 Hoifung Poon and Pedro Domingos. 2009. Unsupervised semantic parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1–10, Singapore. ACL. Hoifung Poon and Pedro Domingos. 2010. Unsupervised ontological induction from text. In Proceedings of the Forty Eighth Annual Meeting of the Association for Computational Linguistics, pages 296– 305, Uppsala, Sweden. ACL. Ana-Maria Popescu, Oren Etzioni, and Henry Kautz. 2003. Towards a theory of natural language interfaces to databases. In IUI-03. Ana-Maria Popescu, Alex Armanasu, Oren Etzioni, David Ko, and Alexander Yates. 2004. Modern natural language interfaces to databases: Composing statistical parsing with semantic tractability. In COLING-04. Chris Quirk, Pallavi Choudhury, Jianfeng Gao, Hisami Suzuki, Kristina Toutanova, Michael Gamon, Wentau Yih, and Lucy Vanderwende. 2012. MSR SPLAT, a language analysis toolkit. In Proceedings of NAACL HLT 2012 Demonstration Session. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of the Sixteen European Conference on Machine Learning. Ivan Titov and Alexandre Klementiev. 2011. A bayesian model for unsupervised semantic parsing. In Proceedings of the Forty Ninth Annual Meeting of the Association for Computational Linguistics. Ivan Titov and Alexandre Klementiev. 2012. A bayesian approach to unsupervised semantic role induction. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammers. In Proceedings of the Twenty First Conference on Uncertainty in Artificial Intelligence, pages 658–666, Edinburgh, Scotland. AUAI Press. Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to logical form. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. 943
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 944–953, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Automatic detection of deception in child-produced speech using syntactic complexity features Maria Yancheva Division of Engineering Science, University of Toronto Toronto Ontario Canada [email protected] Frank Rudzicz Toronto Rehabilitation Institute; and Department of Computer Science, University of Toronto Toronto Ontario Canada [email protected] Abstract It is important that the testimony of children be admissible in court, especially given allegations of abuse. Unfortunately, children can be misled by interrogators or might offer false information, with dire consequences. In this work, we evaluate various parameterizations of five classifiers (including support vector machines, neural networks, and random forests) in deciphering truth from lies given transcripts of interviews with 198 victims of abuse between the ages of 4 and 7. These evaluations are performed using a novel set of syntactic features, including measures of complexity. Our results show that sentence length, the mean number of clauses per utterance, and the StajnerMitkov measure of complexity are highly informative syntactic features, that classification accuracy varies greatly by the age of the speaker, and that accuracy up to 91.7% can be achieved by support vector machines given a sufficient amount of data. 1 Introduction The challenge of disambiguating between truth and deception is critical in determining the admissibility of court testimony. Unfortunately, the testimony of maltreated children is often not admitted in court due to concerns about truthfulness since children can be instructed to deny transgressions or misled to elicit false accusations (Lyon and Dorado, 2008). However, the child is often the only witness of the transgression (Undeutsch, 2008); automatically determining truthfulness in such situations is therefore a paramount goal so that justice may be served effectively. 2 Related Work Research in the detection of deception in adult speech has included analyses of verbal and nonverbal cues such as behavioral changes, facial expression, speech dysfluencies, and cognitive complexity (DePaulo et al., 2003). Despite statistically significant predictors of deception such as shorter talking time, fewer semantic details, and less coherent statements, DePaulo et al. (2003) found that the median effect size is very small. Deception without special motivation (e.g., everyday ‘white lies’) exhibited almost no discernible cues of deception. However, analysis of moderating factors showed that cues were significantly more numerous and salient when lies were about transgressions. Literature on deception in children is relatively limited. In one study, Lewis et al. (1989) studied 3-year-olds and measured behavioral cues, such as facial expression and nervous body movement, before and after the elicitation of a lie. Verbal responses consisted of yes/no answers. Results suggested that 3-year-old children are capable of deception, and that non-verbal behaviors during deception include increases in ‘positive’ behaviors (e.g., smiling). However, verbal cues of deception were not analyzed. Crucially, Lewis et al. (1989) showed that humans are no more accurate in deciphering truth from deception in child speech than in adult speech, being only about 50% accurate. 
More recently, researchers have used linguistic features to identify deception. Newman et al. (2003) inferred deception in transcribed, typed, and handwritten text by identifying features of linguistic style such as the use of personal pronouns 944 and exclusive words (e.g., but, except, without). These features were obtained with the Linguistic Inquiry and Word Count (LIWC) tool and used in a logistic regression classifier which achieved, on average, 61% accuracy on test data. Feature analysis showed that deceptive stories were characterized by fewer self-references, more negative emotion words, and lower cognitive complexity, compared to non-deceptive language. Another recent stylometric experiment in automatic identification of deception was performed by Mihalcea and Strapparava (2009). The authors used a dataset of truthful and deceptive typed responses produced by adult subjects on three different topics, collected through the Amazon Mechanical Turk service. Two classifiers, Na¨ıve Bayes (NB) and a support vector machine (SVM), were applied on the tokenized and stemmed statements to obtain best classification accuracies of 70% (abortion topic, NB), 67.4% (death penalty topic, NB), and 77% (friend description, SVM), where the baseline was taken to be 50%. The large variability of classifier performance based on the topic of deception suggests that performance is context-dependent. The authors note this as well by demonstrating significantly lower results of 59.8% for NB and 57.8% for SVM when crosstopic classification is performed by training each classifier on two topics and testing on the third. The Mihalcea-Strapparava mturk dataset was further used in a study by Feng et al. (2012) which employs lexicalized and unlexicalized production rules to obtain deep syntactic features. The crossvalidation accuracy obtained on the three topics was improved to 77% (abortion topic), 71.5% (death penalty topic), and 85% (friend description). The results nevertheless varied with topic. Another experiment using syntactic features for identifying sentences containing uncertain or unreliable information was conducted by Zheng et al. (2010) on an adult-produced dataset of abstracts and full articles from BioScope, and on paragraphs from Wikipedia. The results demonstrated that using syntactic dependency features extracted with the Stanford parser improved performance on the biological dataset, while an ensemble classifier combining a conditional random field (CRF) and a MaxEnt classifier performed better than individual classifiers on the Wikipedia dataset. A meta-analysis of features used in deception detection was performed by Hauch et al. (2012) and revealed that verbal cues based on lexical categories extracted using the LIWC tool show statistically significant, though small, differences between truth- and lie-tellers. Vartapetiance and Gillam (2012) surveyed existing cues to verbal deception and demonstrated that features in LIWC are not indicative of deception in online content, recommending that the features used to identify deception and the thresholds between deception and truth be based on the specific data set. In the speech community, analysis of deceptive speech has combined various acoustic, prosodic, and lexical features (Hirschberg et al., 2005). Graciarena et al. 
(2006) combined two independent systems — an acoustic Gaussian mixture model based on Mel cepstral features, and a prosodic support vector machine based on features such as pitch, energy, and duration — and achieved an accuracy of 64.4% on a test subset of the ColumbiaSRI-Colorado (CSC) corpus of deceptive and nondeceptive speech (Hirschberg et al., 2005). While previous studies have achieved some promising results in detecting deception with lexical, acoustic, and prosodic features, syntax remains relatively unexplored compared to LIWCbased features. Syntactic complexity as a cue to deception is consistent with literature in social psychology which suggests that emotion suppression (e.g., inhibition of guilt and fear) consumes cognitive resources, which can influence the underlying complexity of utterances (Richards and Gross, 1999; Richards and Gross, 2000). Additionally, the use of syntactic features is motivated by their successful use on adult-produced datasets for detecting deceptive or uncertain utterances (Feng et al., 2012; Zheng et al., 2010), as well as in other applications, such as the evaluation of changes in text complexity (Stajner and Mitkov, 2012), the identification of personality in conversation and text (Mairesse et al., 2007), and the detection of dementia through syntactic changes in writing (Le et al., 2011). Past work has focused on identifying deceptive speech produced by adults. The problem of determining validity of child testimony in high-stakes child abuse court cases motivates the analysis of child-produced deceptive language. Further, the use of binary classification schemes in previous work does not account for partial truths often encountered in real-life scenarios. Due to the rarity of real deceptive data, studies typically use arti945 ficially produced deceptive language which falls unambiguously in one of two classes: complete truth or complete deception (Newman et al., 2003; Mihalcea and Strapparava, 2009). Studies which make use of real high-stakes courtroom data containing partial truths, such as the Italian DECOUR corpus analyzed by Fornaciari and Poesio (2012), preprocess the dataset to eliminate any partially truthful utterances. Since utterances of this kind are common in real language, their elimination from the dataset is not ideal. The present study evaluates the viability of a novel set of 17 syntactic features as markers of deception in five classifiers. Moreover, to our knowledge, it is the first application of automatic deception detection to a real-life dataset of deceptive speech produced by maltreated children. The data is scored using a gradient of truthfulness, which is used to represent completely true, partially true, and completely false statements. Descriptions of the data (section 3) and feature sets (section 4) precede experimental results (section 5) and the concluding discussion (section 6). 3 Data The data used in this study were obtained from Lyon et al. (2008), who conducted and transcribed a truth-induction experiment involving maltreated children awaiting court appearances in the Los Angeles County Dependency Court. Subjects were children between the ages of 4 and 7 (99 boys and 99 girls) who were interviewed regarding an unambiguous minor transgression involving playing with a toy. 
To ensure an understanding of lying and its negative consequences, all children passed a preliminary oath-taking competency task, requiring each child to correctly identify a truth-teller and a lie-teller in an object labeling task, as well as to identify which of the two would be the target of negative consequences. During data collection, a confederate first engaged each child individually in one of four conditions: a) play, b) play and coach, c) no play, and d) no play and coach. In the two play conditions, the confederate engaged the child in play with a toy house (in the no play conditions, they did not); in the two coach conditions, the confederate coached the child to lie (i.e., to deny playing if they played with the toy house, or to admit playing if they did not). The confederate then left and the child was interviewed by a second researcher who performed a truth-induction manipulation consisting of one of: a) control — no manipulation, b) oath — the interviewer reminded the child of the importance of telling the truth and elicited a promise of truth-telling, and c) reassurance — the interviewer reassured the child that telling the truth will not lead to any negative consequences. Each pre- and post-induction transcription may contain explicit statements of up to seven features: looking at toy-house, touching toy-house, playing with toy-house, opening toy-house doors or windows to uncover hidden toys, playing with these hidden toys, spinning the toy-house, and putting back or hiding a toy. All children in the play condition engaged in all seven actions, while children in the no play condition engaged in none. An eighth feature is the lack of explicit denial of touching or playing with the toy house, which is considered to be truthful in the play condition, and deceptive in the no play condition (see the examples in the appendix). A transcription is labeled as truth if at least half of these features are truthful (53.2% of all transcriptions) and lie otherwise (46.8% of transcriptions). Other thresholds for this binary discrimination are explored in section 5.4. Each child’s verbal response was recorded twice: at time T1 (prior to truth-induction), and at time T2 (after truth-induction). Each child was subject to one of the four confederate conditions and one of the three induction conditions. The raw data were pre-processed to remove subjects with blank transcriptions, resulting in a total of 173 subjects (87 boys and 86 girls) and 346 transcriptions. 4 Methods Since the data consist of speech produced by 4- to 7-year-old children, the predictive features must depend on the level of syntactic competence of this age group. The “continuity assumption” states that children have a complete system of abstract syntactic representation and have the same set of abstract functional categories accessible to adults (Pinker, 1984). An experimental study with 3to 8-year-old children showed that their syntactic competence is comparable to that of adults; specifically, children have a productive rule for passive forms which allows them to generalize to previously unheard predicates while following adult-like constraints to avoid over-generalization (Pinker et al., 1987). Recent experiments with syntactic priming showed that children’s representations of abstract passive constructions are welldeveloped as early as age 3 or 4, and young 946 children are generally able to form passive constructions with both action and non-action verbs (Thatcher et al., 2007). 
These results suggest that measures of syntactic complexity that are typically used to evaluate adult language could be adapted to child speech, provided that the children are at least 3 or 4 years old. Here, the complexity of speech is characterized by the length of utterances and by the frequency of dependent and coordinate clauses, with more complex speech consisting of longer utterances and a higher number of subordinate clauses. We segmented the transcriptions into sentences, clauses and T-units, which are “minimally terminable units” consisting of a main clause and its dependent clauses (Hunt, 1965; O’Donnell et al., 1967)1. Deceptive communication generally has shorter duration and is less detailed than nondeceptive speech (DePaulo et al., 2003), so the length of each type of segment was counted along with frequency features over segments. Here, the frequency of dependent and coordinate clauses per constituent approximate clause-based measures of complexity. Our approach combines a set of features obtained from a functional dependency grammar (FDG) parser with another (non-overlapping) set of features obtained from a phrase-based grammar parser. We obtained FDG parses of the transcriptions using Connexor’s Machinese Syntax parser (Tapanainen and J¨arvinen, 1997) and extracted the following 5 features: ARI Automated readability index. Measures word and sentence difficulty, 4.71 c w +0.5w s − 21.43, where c is the number of characters, w is the number of words, and s is the number of sentences (Smith and Senter, 1967). ASL Average sentence length. The number of words over the number of sentences. COM Sentence complexity. The ratio of sentences with ≥2 finite predicators to those with ≤ 1 finite predicator (Stajner and Mitkov, 2012). PAS Passivity. The ratio of non-finite main predicators in a passive construction (@– 1T-units include single clauses, two or more phrases in apposition, or clause fragments. Generally, coordinate clauses are split into separate T-units, as are clauses interrupted by discourse boundary markers. FMAINV %VP) to the total number of finite (@+FMAINV %VA) and non-finite (@– FMAINV %VA and @–FMAINV %VP) main predicators, including active constructions. MCU Mean number of clauses per utterance. Additionally, we searched for specific syntactic patterns in phrase-based parses of the data. We used the Stanford probabilistic natural language parser (Klein and Manning, 2003) for constructing these parse trees, the Stanford Tregex utility (Levy and Andrew, 2006) for searching the constructed parse trees, and a tool provided by Lu (2011) which extracts a set of 14 clause-based features in relation to sentence, clause and T-unit constituents. 4.1 Feature analysis Analysis of variance (ANOVA) was performed on the set of 17 features, shown in Table 1. A onefactor ANOVA across the truth and lie groups showed three significant feature variations: average sentence length (ASL), sentence complexity (COM), and mean clauses per utterance (MCU). Dependencies between some feature pairs that are positively correlated are shown in Figure 1. As expected, the number of clauses (MCU) is dependent on sentence length (ASL) (r(344) = .92, p < .001). Also, the number of T-units is dependent on the number of clauses: CN/C is correlated with CN/T (r(344) = .89, p < .001), CP/C is correlated with CP/T (r(344) = .85, p < .001), and DC/C is correlated with DC/T (r(344) = .92, p < .001). Other features are completely uncorrelated. 
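The surface measures in this list are easy to reproduce once a transcription has been segmented. As a minimal sketch (assuming plain whitespace tokenization and a pre-segmented sentence list, whereas the study itself relied on Connexor's Machinese Syntax parser for segmentation), ARI and ASL can be computed as follows; the function name is illustrative only.

```python
def ari_and_asl(sentences):
    """Compute the automated readability index (ARI) and average
    sentence length (ASL) for one transcription, given a list of
    sentence strings.  Assumes whitespace tokenization."""
    words = [w for s in sentences for w in s.split()]
    c = sum(len(w) for w in words)   # number of characters
    w = len(words)                   # number of words
    s = len(sentences)               # number of sentences
    ari = 4.71 * c / w + 0.5 * w / s - 21.43   # Smith and Senter (1967)
    asl = w / s
    return ari, asl

# Toy usage on a two-sentence transcription.
print(ari_and_asl(["we played that same game", "I won and he won"]))
```

The clause-based measures, in contrast, require parse trees; and as the correlation analysis above shows, some of them overlap strongly while others measure genuinely distinct properties.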
For example, the number of passive constructions is independent of sentence length (r(344) = −.0020, p > .05), the number of complex nominals per clause is independent of clause length (r(344) = .076, p > .05), and the density of dependent clauses is independent of the density of coordinate phrases (r(344) = −.027, p > .05). 5 Results We evaluate five classifiers: logistic regression (LR), a multilayer perceptron (MLP), na¨ıve Bayes (NB), a random forest (RF), and a support vector machine (SVM). Here, na¨ıve Bayes, which assumes conditional independence of the features, and logistic regression, which has a linear decision boundary, are baselines. The MLP includes a variable number of layers of hidden units, which 947 Figure 1: Independent and dependent feature pairs; data points are labeled as truth (blue) and lie (green). Feature F1,344 d Automated Readability Index (ARI) 0.187 0.047 Average Sentence Length (ASL) 3.870 0.213 Sentence Complexity (COM) 10.93 0.357 Passive Sentences (PAS) 1.468 0.131 Mean Clauses per Utterance (MCU) 6.703 0.280 Mean Length of T-Unit (MLT) 2.286 0.163 Mean Length of Clause (MLC) 0.044 -0.023 Verb Phrases per T-Unit (VP/T) 3.391 0.199 Clauses per T-Unit (C/T) 2.345 0.166 Dependent Clauses per Clause (DC/C) 1.207 0.119 Dependent Clauses per T-Unit (DC/T) 1.221 0.119 T-Units per Sentence (T/S) 3.692 0.208 Complex T-Unit Ratio (CT/T) 2.103 0.157 Coordinate Phrases per T-Unit (CP/T) 0.463 -0.074 Coordinate Phrases per Clause (CP/C) 0.618 -0.085 Complex Nominals per T-Unit (CN/T) 0.722 0.092 Complex Nominals per Clause (CN/C) 0.087 0.032 Table 1: One-factor ANOVA (F statistics and Cohen’s d-values, α = 0.05) on all features across truth and lie groups. Statistically significant results are in bold. apply non-linear activation functions on a linear combination of inputs. The SVM is a parametric binary classifier that provides highly non-linear decision boundaries given particular kernels. The random forest is an ensemble classifier that returns the mode of the class predictions of several decision trees. 5.1 Binary classification across all data The five classifiers were evaluated on the entire pooled data set with 10-fold cross validation. Table 2 lists the parameters varied for each classifier, and Table 3 shows the cross-validation accuracy for the classifiers with the best parameter settings. The na¨ıve Bayes classifier performs poorly, as could be expected given the assumption of conditional feature independence. The SVM classifier performs best, with 59.5% cross-validation accuracy, which is a statistically significant improvement over the baselines of LR (t(4) = 22.25, p < .0001), and NB (t(4) = 16.19, p < .0001). Parameter Values LR R Ridge value 10−10 to 10−2 MLP L Learning rate 0.0003 to 0.3 M Momentum 0 to 0.5 H Number of hidden layers 1 to 5 NB K Use kernel estimator true, false RF I Number of trees 1 to 20 K Maximum depth unlimited, 1 to 10 SVM K Kernel Linear, RBF, Polynomial E Polynomial Exponent 2 to 5 G RBF Gamma 0.001 to 0.1 C Complexity constant 0.1 to 10 Table 2: Empirical parameter settings for each classifier 5.2 Binary classification by age group Significant variation in syntactic complexity is expected across ages. 
To account for such variation, we segmented the dataset in four groups: 44 tran948 Accuracy Parameters LR 0.5347 R = 10−10 MLP 0.5838 L = 0.003, M = 0.4 NB 0.5173 K = false RF 0.5809 I = 10, K = 6 SVM 0.5954 Polynomial, E = 3, C = 1 Table 3: Cross-validation accuracy of binary classification performed on entire dataset of 346 transcriptions. scriptions of 4-year-olds, 120 of 5-year-olds, 94 of 6-year-olds, and 88 of 7-year-olds. By comparison, Vrij et al. (2004) used data from only 35 children in their study of 5- and 6-year-olds. Classification of truthfulness was performed separately for each age, as shown in Table 4. In comparison with classification accuracy on pooled data, a paired t-test shows statistically significant improvement across all age groups using RF, t(3) = 10.37, p < .005. Age (years) 4 5 6 7 LR 0.6136 0.5333 0.5957* 0.4886 MLP 0.6136† 0.5583 0.6170† 0.5909* NB 0.6136* 0.5250 0.5426 0.5682 RF 0.6364† 0.6333* 0.6383† 0.6591† SVM 0.6591 0.5583 0.6064 0.6250* Table 4: Cross-validation accuracy of binary classification partitioned by age. The best classifier at each age is shown in bold. The classifiers showing statistically significant incremental improvement are marked: *p < .05, †p < .001 (paired t-test, d.f. 4) 5.3 Binary classification by age group, on verbose transcriptions The length of speech, in number of words, varies widely (min = 1, max = 167, µ = 36.83, σ = 28.34) as a result of the unregulated nature of the interview interaction. To test the effect of verbosity, we segment the data by child age and select only the transcriptions with above-average word counts (i.e., ≥37 words), resulting in four groups: 12 transcriptions of 4-year-olds, 48 of 5year-olds, 39 of 6-year-olds, and 37 of 7-year-olds. This mimics the scenario in which some minimum threshold is placed on the length of a child’s speech. In this verbose case, 63.3% of transcripts are labeled truth across age groups (using the same definition of truth as in section 3), with no substantial variation between ages; in the non-verbose case, 53.2% are marked truth. Fisher’s exact test on this contingency table reveals no significant difference between these distributions (p = 0.50). Classification results are shown in Table 5. The size of the training set for the youngest age category is low compared to the other age groups, which may reduce the reliability of the higher accuracy achieved in that group. The other three age groups show a growing trend, which is consistent with expectations — older children exhibit greater syntactic complexity in speech, allowing greater variability of feature values across truth and deception. Here, both SVM and RF achieve 83.8% cross-validation accuracy in identifying deception in the speech of 7-year-old subjects. 4 5 6 7 LR 0.7500† 0.5417 0.6667† 0.7297† MLP 0.8333† 0.6250† 0.6154 0.7838† NB 0.6667† 0.4583 0.4103 0.7297* RF 0.8333† 0.5625 0.7179† 0.8378† SVM 0.9167* 0.6250† 0.6154* 0.8378† Table 5: Cross-validation accuracy of binary classification performed on transcriptions with above average word count (136 transcriptions), by age group. Rows represent classifiers, columns represent ages. The best classifier for each age is in bold. The classifiers showing statistically significant incremental improvement are marked: *p < .05, †p < .001 (paired t-test, d.f. 
4) 5.4 Threshold variation To study the effect of the threshold between the truth and lie classes, we vary the value of the threshold, τ, from 1 to 8, requiring the admission of at least τ truthful details (out of 8 possible details) in order to label a transcription as truth. The effect of τ on classification accuracy over the entire pooled dataset for each of the 5 classifiers is shown in Figure 2. A one-factor ANOVA with τ as the independent variable with 8 levels, and cross-validation accuracy as the dependent variable, confirms that the effect of the threshold is statistically significant (F7,40 = 220.69, p < .0001) with τ = 4 being the most conservative setting. 949 Figure 2: Effect of threshold and classifier choice on cross-validation accuracy. Threshold τ = 0 is not present, since all data would be labeled truth. 5.5 Linguistic Inquiry and Word Count The Linguistic Inquiry and Word Count (LIWC) tool for generating features based on word category frequencies has been used in deception detection with adults, specifically: first-person singular pronouns (FP), exclusive words (EW), negative emotion words (NW), and motion verbs (MV) (Newman et al., 2003). We compare the performance of classifiers trained with our 17 syntactic features to those of classifiers trained with those LIWC-based features on the same data. To evaluate the four LIWC categories, we use the 86 words of the Pennebaker model (Little and Skillicorn, 2008; Vartapetiance and Gillam, 2012). The performance of the classifiers trained with LIWC features is shown in Table 6. The set of 17 syntactic features proposed here result in significantly higher accuracies across classifiers and experiments (µ = 0.63, σ = 0.10) than with the LIWC features used in previous work (µ = 0.58, σ = 0.09), as shown in Figure 3 (t(53) = −0.0691, p < .0001). 6 Discussion and future work This paper evaluates automatic estimation of truthfulness in the utterances of children using a novel set of lexical-syntactic features across five types of classifiers. While previous studies have favored word category frequencies extracted with LIWC (Newman et al., 2003; Little and Skillicorn, 2008; Hauch et al., 2012; Vartapetiance and Gillam, Figure 3: Effect of feature set choice on crossvalidation accuracy. 2012; Almela et al., 2012; Fornaciari and Poesio, 2012), our results suggest that the set of syntactic features presented here perform significantly better than the LIWC feature set on our data, and across seven out of the eight experiments based on age groups and verbosity of transcriptions. Statistical analyses showed that the average sentence length (ASL), the Stajner-Mitkov measure of sentence complexity (COM), and the mean number of clauses per utterance (MCU) are the features most predictive of truth and deception (see section 4.1). Further preliminary experiments are exploring two methods of feature selection, namely forward selection and minimumRedundancy-Maximum-Relevance (mRMR). In forward selection, features are greedily added oneat-a-time (given an initially empty feature set) until the cross-validation error stops decreasing with the addition of new features (Deng, 1998). This results in a set of only two features: sentence complexity (COM) and T-units per sentence (T/S). Features are selected in mRMR by minimizing redundancy (i.e., the average mutual information between features) and maximizing the relevance (i.e., the mutual information between the given features and the class) (Peng et al., 2005). 
This approach selects five features: verb phrases per Tunit (VP/T), passive sentences (PAS), coordinate phrases per clause (CP/C), sentence complexity (COM), and complex nominals per clause (CN/C). These results confirm the predictive strength of sentence complexity. Further, preliminary classi950 Group Accuracy Best Classifier Parameters Entire dataset 0.5578 RF I = 20, K = unlimited 4-yr-olds 0.5682 MLP L = 0.005, M = 0.3, H = 1 5-yr-olds 0.5583 RF I = 5, K = unlimited 6-yr-olds 0.5319 MLP L = 0.005, M = 0.3, H = 1 7-yr-olds 0.6591 RF I = 5, K = unlimited 4-yr-olds, verbose 0.8333 SVM PolyKernel, E = 4, C = 10 5-yr-olds, verbose 0.7083 SVM NormalizedPolyKernel, E = 1, C = 10 6-yr-olds, verbose 0.6154 MLP L = 0.09, M = 0.2, H = 1 7-yr-olds, verbose 0.7027 MLP L = 0.01, M = 0.5, H = 3 Table 6: Best 10-fold cross-validation accuracies achieved on various subsets of the data, using the LIWC-based feature set. fication results across all classifiers suggest that accuracies are significantly higher given forward selection (µ = 0.58, σ = 0.02) relative to the original feature set (µ = 0.56, σ = 0.03); t(5) = −2.28, p < .05 while the results given the mRMR features are not significantly different. Generalized cross-validation accuracy increases significantly given partitioned age groups, which suggests that the importance of features may be moderated by age. A further incremental increase is achieved by considering only transcriptions above a minimum length. O’Donnell et al. (1967) examined syntactic complexity in the speech and writing of children aged 8 to 12, and found that speech complexity increases with age. This phenomenon appears to be manifested in the current study by the extent to which classification increases generally across the 5-, 6-, and 7-yearold groups, as shown in Table 5. Future examination of the effect of age on feature saliency may yield more appropriate age-dependent features. While past research has used logistic regression as a binary classifier (Newman et al., 2003), our experiments show that the best-performing classifiers allow for highly non-linear class boundaries; SVM and RF models achieve between 62.5% and 91.7% accuracy across age groups — a significant improvement over the baselines of LR and NB, as well as over previous results. Moreover, since the performance of human judges in identifying deception is not significantly better than chance (Lewis et al., 1989; Newman et al., 2003), these results show promise in the use of automatic detection methods. Partially truthful transcriptions were scored using a gradient of 0 to 8 truthful details, and a threshold τ was used to perform binary classification. Extreme values of τ lead to poor F-scores despite high accuracy, since the class distribution of transcriptions is very skewed towards either class. Future work can explore the effect of threshold variation given sufficient data with even class distributions for each threshold setting. When such data is unavailable, experiments can make use of the most conservative setting (τ = 4, or an equivalent mid-way setting) for analysis of real-life utterances containing partial truths. Future work should consider measures of confidence for each classification, where possible, so that more ambiguous classifications are not treated on-par with more certain ones. For instance, confidence can be approximated in MLPs by the entropy across continuous-valued output nodes, and in RFs by the number of component decision trees that agree on a classification. 
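As a rough sketch of the second of these confidence measures, the fraction of trees that agree with the forest's majority vote can be read directly off the individual tree predictions. The code below assumes scikit-learn's RandomForestClassifier (the reported experiments did not necessarily use this toolkit), and the random feature vectors stand in for the 17 syntactic features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def forest_confidence(forest, X):
    """Fraction of trees that agree with the forest's majority vote.
    Note: scikit-learn's sub-estimators predict class indices, so they
    are mapped back onto forest.classes_ before comparison."""
    per_tree = np.stack([
        forest.classes_.take(tree.predict(X).astype(int))
        for tree in forest.estimators_
    ])
    return (per_tree == forest.predict(X)).mean(axis=0)

# Toy usage with random stand-ins for the 17 syntactic features.
rng = np.random.RandomState(0)
X_train, y_train = rng.rand(100, 17), rng.randint(0, 2, 100)
forest = RandomForestClassifier(n_estimators=10, random_state=0)
forest.fit(X_train, y_train)
print(forest_confidence(forest, rng.rand(5, 17)))
```

An analogous score for the MLP would invert the entropy of its output activations, so that low entropy corresponds to high confidence.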
Although acoustic data were not provided with this data set (Lyon and Dorado, 2008) (and, in practice, cannot be assured), future work should also examine the differences in the acoustics of children across truth conditions. Acknowledgments The authors thank Kang Lee (Ontario Institute for Studies in Education, University of Toronto) and Angela Evans (Brock University) for sharing both this data set and their insight. 951 Appendix The following is an example of evasive deceptive speech from a 6-year-old after no truth induction (i.e., the control condition in which the interviewer merely states that he needs to ask more questions): ... Yeah yeah ok, I’m a tell you. We played that same game and I won and he won. I’m going to be in trouble if I tell you. It a secret. It’s a secret ’cuz we’re friends. ... Transcription excerpt labeled as truth by a threshold of τ = 1: 7-year-old child’s response (play, no coach condition), in which the child does not explicitly deny playing with the toy house, and admits to looking at it but does not confess to any of the other six actions: ...I was playing, I was hiding the coin and I was trying to find the house... trying to see who was in there... Transcription excerpt labeled as truth by a threshold of τ = 4: 7-year-old child’s response (play, no coach condition), in which the child does not explicitly deny playing, and admits to three actions: ...me and him was playing with it... we were just spinning it around and got the toys out... References Angela Almela, Rafael Valencia-Garcia, and Pascual Cantos. 2012. Seeing through deception: A computational approach to deceit detection in written communication. Proceedings of the EACL 2012 Workshop on Computational Approaches to Deception Detection, April 23-27, 2012, Avignon, France, 1522. Ethem Alpaydin. 2010. Introduction to Machine Learning. Cambridge, MA: MIT Press. Kan Deng. 1998. OMEGA: On-line memory-based general purpose system classifier. Doctoral thesis, School of Computer Science, Carnegie Mellon University Bella M. DePaulo, James J. Lindsay, Brian E. Malone, Laura Muhlenbruck, Kelly Charlton, and Harris Cooper. 2003. Cues to deception. Psychological Bulletin, 129(1):74-118. Song Feng, Ritwik Banerjee and Yejin Choi. 2012. Syntactic stylometry for deception detection. Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, July 8-14, 2012, Jeju, Republic of Korea, 171-175. Tommaso Fornaciari and Massimo Poesio. 2012. On the use of homogeneous sets of subjects in deceptive language analysis. Proceedings of the EACL 2012 Workshop on Computational Approaches to Deception Detection, April 23-27, 2012, Avignon, France, 39-47. Martin Graciarena, Elizabeth Shriberg, Andreas Stolcke, Frank Enos, Julia Hirschberg, Sachin Kajarekar. 2006. Combining prosodic lexical and cepstral systems for deceptive speech detection. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages I1033-I-1036. Valerie Hauch, Jaume Masip, Iris Blandon-Gitlin, and Siegfried L. Sporer. 2012. Linguistic cues to deception assessed by computer programs: A metaanalysis. Proceedings of the EACL Workshop on Computational Approaches to Deception Detection, pages 1-4. Julia Hirschberg, Stefan Benus, Jason M. Brenier, Frank Enos, Sarah Friedman, Sarah Gilman, Cynthia Girand, Martin Graciarena, Andreas Kathol, Laura Michaelis, Bryan Pellom, Elizabeth Shriberg, and Andreas Stolcke. 2005. Distinguishing deceptive from non-deceptive speech. 
Proceedings of Eurospeech 2005, pages 1833-1836. Kellogg W. Hunt. 1965. Grammatical structures written at three grade levels. NCTE Research Report No. 3. Dan Klein and Christopher Manning. 2003. Accurate unlexicalized parsing. Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423-430. Xuan Le, Ian Lancashire, Graeme Hirst, and Regina Jokel. 2011. Longitudinal detection of dementia through lexical and syntactic changes in writing: a case study of three British novelists. Literary and Linguistic Computing, 26(4):435-461. Roger Levy and Galen Andrew. 2006. Tregex and Tsurgeon: tools for querying and manipulating tree data structures. 5th International Conference on Language Resources and Evaluation. Michael Lewis, Catherine Stanger, and Margaret W. Sullivan. 1989. Deception in 3-year-olds. Developmental Psychology, 25(3):439-443. A. Little and D. B. Skillicorn. 2008. Detecting deception in testimony. Proceedings of IEEE International Conference of Intelligence and Security Informatics (ISI 2008), June 17-20, 2008, Taipei, Taiwan, 13-18. 952 Xiaofei Lu. 2011. Automatic analysis of syntactic complexity in second language writing. International Journal of Corpus Linguistics, 15(4):474-496. Thomas D. Lyon and J. S. Dorado. 2008. Truth induction in young maltreated children: the effects of oath-taking and reassurance on true and false disclosures. Child Abuse & Neglect, 32(7):738-748. Thomas D. Lyon, Lindsay C. Malloy, Jodi A. Quas, and Victoria A. Talwar. 2008. Coaching, truth induction, and young maltreated children’s false allegations and false denials. Child Development, 79(4):914-929. Franc¸ois Mairesse, Marilyn A. Walker, Matthias R. Mehl, and Roger K. Moore. 2007. Using linguistic cues for the automatic recognition of personality in conversation and text. Journal of Artificial Intelligence Research, 30:457-500. Rada Mihalcea and Carlo Strapparava. 2009. The lie detector: explorations in the automatic recognition of deceptive language. Proceedings of the ACLIJCNLP 2009 Conference Short Papers, August 4, 2009, Suntec, Singapore, 309-312. Matthew L. Newman, James W. Pennebaker, Diane S. Berry, and Jane M. Richards. 2003. Lying words: predicting deception from linguistic styles. PSPB, 29(5):665-675. Roy C. O’Donnell, William J. Griffin, and Raymond C. Norris. 1967. A transformational analysis of oral and written grammatical structures in the language of children in grades three, five, and seven. PSPB, 29(5):665-675. Steven Pinker. 1984. Language learnability and language development, Cambridge, MA: Harvard University Press. Steven Pinker, David S. Lebeaux, and Loren Ann Frost. 1987. Productivity and constraints in the acquisition of the passive. Cognition, 26:195-267. Hanchuan Peng, Fuhui Long, and Chris Ding. 2005. Feature selection based on mutual information: criteria of max-dependency, max-relevance, and minredundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8):1226-1238. J. M. Richards and J. J. Gross. 1999. Composure at any cost? The cognitive consequences of emotion suppression. PSPB, 25(8):1033-1044. J. M. Richards and J. J. Gross. 2000. Emotion regulation and memory: the cognitive costs of keeping one’s cool. Journal of Personality and Social Psychology, 79:410-424. E. A. Smith and R. J. Senter. 1967. Automated readability index. Technical report, Defense Technical Information Center. United States. Sanja Stajner and Ruslan Mitkov. 2012. 
Diachronic changes in text complexity in 20th century English language: an NLP Approach. Proceedings of the International Conference on Language Resources and Evaluation (LREC), pages 1577-1584. Pasi Tapanainen and Timo J¨arvinen. 1997. A nonprojective dependency parser. Proceedings of the 5th Conference on Applied Natural Language Processing, pages 64-71. Katherine Thatcher, Holly Branigan, Janet McLean, and Antonella Sorace. 2007. Children’s early acquisition of the passive: evidence from syntactic priming. Child Language Seminar, University of Reading. Udo Undeutsch. 2008. Courtroom evaluation of eyewitness testimony. Applied Psychology, 33(1):5166. Anna Vartapetiance and Lee Gillam. 2012. “I don’t know where he is not”: does deception research yet offer a basis for deception detectives? Proceedings of the EACL 2012 Workshop on Computational Approaches to Deception Detection, April 23-27, 2012, Avignon, France, 5-14. Aldert Vrij, Lucy Akehurst, Stavroula Soukara, and Ray Bull. 2004. Detecting deceit via analyses of verbal and nonverbal behavior in children and adults. Human Communication Research, 30(1):8– 41 Yi Zheng, Qifeng Dai, Qiming Luo, and Enhong Chen. 2010. Hedge classification with syntactic dependency features based on an ensemble classifier. Proceedings of the Fourteenth Conference on Computational Natural Language Learning, July 15-16, 2010, Uppsala, Sweden, 151-156. 953
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 954–963, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Sentiment Relevance Christian Scheible Institute for Natural Language Processing University of Stuttgart, Germany [email protected] Hinrich Sch¨utze Center for Information and Language Processing University of Munich, Germany Abstract A number of different notions, including subjectivity, have been proposed for distinguishing parts of documents that convey sentiment from those that do not. We propose a new concept, sentiment relevance, to make this distinction and argue that it better reflects the requirements of sentiment analysis systems. We demonstrate experimentally that sentiment relevance and subjectivity are related, but different. Since no large amount of labeled training data for our new notion of sentiment relevance is available, we investigate two semi-supervised methods for creating sentiment relevance classifiers: a distant supervision approach that leverages structured information about the domain of the reviews; and transfer learning on feature representations based on lexical taxonomies that enables knowledge transfer. We show that both methods learn sentiment relevance classifiers that perform well. 1 Introduction It is generally recognized in sentiment analysis that only a subset of the content of a document contributes to the sentiment it conveys. For this reason, some authors distinguish the categories subjective and objective (Wilson and Wiebe, 2003). Subjective statements refer to the internal state of mind of a person, which cannot be observed. In contrast, objective statements can be verified by observing and checking reality. Some sentiment analysis systems filter out objective language and predict sentiment based on subjective language only because objective statements do not directly reveal sentiment. Even though the categories subjective/objective are well-established in philosophy, we argue that they are not optimal for sentiment analysis. We instead introduce the notion of sentiment relevance (S-relevance or SR for short). A sentence or linguistic expression is S-relevant if it contains information about the sentiment the document conveys; it is S-nonrelevant (SNR) otherwise. Ideally, we would like to have at our disposal a large annotated training set for our new concept of sentiment relevance. However, such a resource does not yet exist. For this reason, we investigate two semi-supervised approaches to S-relevance classification that do not require Srelevance-labeled data. The first approach is distant supervision (DS). We create an initial labeling based on domain-specific metadata that we extract from a public database and show that this improves performance by 5.8% F1 compared to a baseline. The second approach is transfer learning (TL) (Thrun, 1996). We show that TL improves F1 by 12.6% for sentiment relevance classification when we use a feature representation based on lexical taxonomies that supports knowledge transfer. In our approach, we classify sentences as S(non)relevant because this is the most fine-grained level at which S-relevance manifests itself; at the word or phrase level, S-relevance classification is not possible because of scope and context effects. However, S-relevance is also a discourse phenomenon: authors tend to structure documents into S-relevant passages and S-nonrelevant passages. To impose this discourse constraint, we employ a sequence model. 
We represent each document as a graph of sentences and apply a minimum cut method. The rest of the paper is structured as follows. Section 2 introduces the concept of sentiment relevance and relates it to subjectivity. In Section 3, we review previous work related to sentiment relevance. Next, we describe the methods applied in this paper (Section 4) and the features we extract (Section 5). Finally, we turn to the description and 954 results of our experiments on distant supervision (Section 6) and transfer learning (Section 7). We end with a conclusion in Section 8. 2 Sentiment Relevance Sentiment Relevance is a concept to distinguish content informative for determining the sentiment of a document from uninformative content. This is in contrast to the usual distinction between subjective and objective content. Although there is overlap between the two notions, they are different. Consider the following examples for subjective and objective sentences: (1) Subjective example: Bruce Banner, a genetics researcher with a tragic past, suffers a horrible accident. (2) Objective example: The movie won a Golden Globe for best foreign film and an Oscar. Sentence (1) is subjective because assessments like tragic past and horrible accident are subjective to the reader and writer. Sentence (2) is objective since we can check the truth of the statement. However, even though sentence (1) has negative subjective content, it is not S-relevant because it is about the plot of the movie and can appear in a glowingly positive review. Conversely, sentence (2) contributes to the positive opinion expressed by the author. Subjectivity and S-relevance are two distinct concepts that do not imply each other: Generally neutral and objective sentences can be S-relevant while certain subjective content is Snonrelevant. Below, we first describe the annotation procedure for the sentiment relevance corpus and then demonstrate empirically that subjectivity and S-relevance differ. 2.1 Sentiment Relevance Corpus For our initial experiments, we focus on sentiment relevance classification in the movie domain. To create a sentiment-relevance-annotated corpus, the SR corpus, we randomly selected 125 documents from the movie review data set (Pang et al., 2002).1 Two annotators annotated the sentences for S-relevance, using the labels SR and SNR. If no decision can be made because a sentence contains both S-relevant and S-nonrelevant linguistic material, it is marked as uncertain. We excluded 360 sentences that were labeled uncertain from the 1We used the texts from the raw HTML files since the processed version does not have capitalization. evaluation. In total, the SR corpus contains 2759 S-relevant and 728 S-nonrelevant sentences. Figure 1 shows an excerpt from the corpus. The full corpus is available online.2 First, we study agreement between human annotators. We had 762 sentences annotated for Srelevance by both annotators with an agreement (Fleiss’ κ) of .69. In addition, we obtained subjectivity annotations for the same data on Amazon Mechanical Turk, obtaining each label through a vote of three, with an agreement of κ = .61. However, the agreement of the subjectivity and relevance labelings after voting, assuming that subjectivity equals relevance, is only at κ = .48. This suggests that there is indeed a measurable difference between subjectivity and relevance. An annotator who we asked to examine the 225 examples where the annotations disagree found that 83.5% of these cases are true differences. 
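The agreement figures above can be reproduced with a short, generic implementation of Fleiss' κ. The sketch below is not tied to the actual annotation files; the toy ratings at the end are invented purely for illustration (two raters, categories SR and SNR).

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a matrix of shape (n_items, n_categories),
    where counts[i, j] is the number of raters who assigned item i
    to category j.  Every item must have the same number of ratings."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]                    # ratings per item
    p_j = counts.sum(axis=0) / (n_items * n_raters)     # category proportions
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 2 raters labelling 6 sentences as SR (col 0) or SNR (col 1).
ratings = [[2, 0], [2, 0], [0, 2], [1, 1], [2, 0], [0, 2]]
print(round(fleiss_kappa(ratings), 3))
```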
2.2 Contrastive Classification Experiment We will now examine the similarities of Srelevance and an existing subjectivity dataset. Pang and Lee (2004) introduced subjectivity data (henceforth P&L corpus) that consists of 5000 highly subjective (quote) review snippets from rottentomatoes.com and 5000 objective (plot) sentences from IMDb plot descriptions. We now show that although the P&L selection criteria (quotes, plot) bear resemblance to the definition of S-relevance, the two concepts are different. We use quote as S-relevant and plot as Snonrelevant data in TL. We divide both the SR and P&L corpora into training (50%) and test sets (50%) and train a Maximum Entropy (MaxEnt) classifier (Manning and Klein, 2003) with bag-ofword features. Macro-averaged F1 for the four possible training-test combinations is shown in Table 1. The results clearly show that the classes defined by the two labeled sets are different. A classifier trained on P&L performs worse by about 8% on SR than a classifier trained on SR (68.5 vs. 76.4). A classifier trained on SR performs worse by more than 20% on P&L than a classifier trained on P&L (67.4 vs. 89.7). Note that the classes are not balanced in the S-relevance data while they are balanced in the subjectivity data. This can cause a misestimation 2http://www.ims.uni-stuttgart. de/forschung/ressourcen/korpora/ sentimentrelevance/ 955 O SNR Braxton is a gambling addict in deep to Mook (Ellen Burstyn), a local bookie. S SNR Kennesaw is bitter about his marriage to a socialite (Rosanna Arquette), believing his wife to be unfaithful. S SR The plot is twisty and complex, with lots of lengthy flashbacks, and plenty of surprises. S SR However, there are times when it is needlessly complex, and at least one instance the storytelling turns so muddled that the answers to important plot points actually get lost. S SR Take a look at L. A. Confidential, or the film’s more likely inspiration, The Usual Suspects for how a complex plot can properly be handled. Figure 1: Example data from the SR corpus with subjectivity (S/O) and S-relevance (SR/SNR) annotations test P&L SR train P&L 89.7 68.5 SR 67.4 76.4 Table 1: TL/in-task F1 for P&L and SR corpora vocabulary fpSR fpSNR {actor, director, story} 0 7.5 {good, bad, great} 11.5 4.8 Table 2: % incorrect sentences containing specific words of class probabilities and lead to the experienced performance drops. Indeed, if we either balance the S-relevance data or unbalance the subjectivity data, we can significantly increase F1 to 74.8% and 77.9%, respectively, in the noisy label transfer setting. Note however that this step is difficult in practical applications if the actual label distribution is unknown. Also, in a real practical application the distribution of the data is what it is – it cannot be adjusted to the training set. We will show in Section 7 that using an unsupervised sequence model is superior to artificial manipulation of class-imbalances. An error analysis for the classifier trained on P&L shows that many sentences misclassified as S-relevant (fpSR) contain polar words; for example, Then, the situation turns bad. In contrast, sentences misclassified as S-nonrelevant (fpSNR) contain named entities or plot and movie business vocabulary; for example, Tim Roth delivers the most impressive acting job by getting the body language right. The word count statistics in Table 2 show this for three polar words and for three plot/movie business words. 
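The cross-corpus setup behind Table 1, and behind the error analysis here, amounts to training a bag-of-words classifier on one corpus and reporting macro-averaged F1 on the other. A minimal sketch, with scikit-learn's logistic regression standing in for the Stanford MaxEnt classifier and placeholder loaders for the two corpora:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def cross_corpus_f1(train_sents, train_labels, test_sents, test_labels):
    """Train a bag-of-words classifier on one corpus and report
    macro-averaged F1 on the other."""
    vec = CountVectorizer(binary=True)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(train_sents), train_labels)
    pred = clf.predict(vec.transform(test_sents))
    return f1_score(test_labels, pred, average="macro")

# Hypothetical loaders; the two corpora are not bundled with this sketch.
# pl_sents, pl_labels = load_pang_lee_subjectivity()
# sr_sents, sr_labels = load_sr_corpus()
# print(cross_corpus_f1(pl_sents, pl_labels, sr_sents, sr_labels))
```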
The P&L-trained classifier seems to have a strong bias to classify sentences with polar words as S-relevant even if they are not, perhaps because most training instances for the category quote are highly subjective, so that there is insufficient representation of less emphatic Srelevant sentences. These snippets rarely contain plot/movie-business words, so that the P&Ltrained classifier assigns almost all sentences with such words to the category S-nonrelevant. 3 Related Work Many publications have addressed subjectivity in sentiment analysis. Two important papers that are based on the original philosophical definition of the term (internal state of mind vs. external reality) are (Wilson and Wiebe, 2003) and (Riloff and Wiebe, 2003). As we argue above, if the goal is to identify parts of a document that are useful/nonuseful for sentiment analysis, then S-relevance is a better notion to use. Researchers have implicitly deviated from the philosophical definition because they were primarily interested in satisfying the needs of a particular task. For example, Pang and Lee (2004) use a minimum cut graph model for review summarization. Because they do not directly evaluate the results of subjectivity classification, it is not clear to what extent their method is able to identify subjectivity correctly. In general, it is not possible to know what the underlying concepts of a statistical classification are if no detailed annotation guidelines exist and no direct evaluation of manually labeled data is performed. Our work is most closely related to (Taboada et al., 2009) who define a fine-grained classification that is similar to sentiment relevance on the highest level. However, unlike our study, they fail to experimentally compare their classification scheme to prior work in their experiments and 956 to show that this scheme is different. In addition, they work on the paragraph level. However, paragraphs often contain a mix of S-relevant and S-nonrelevant sentences. We use the minimum cut method and are therefore able to incorporate discourse-level constraints in a more flexible fashion, giving preference to “relevance-uniform” paragraphs without mandating them. T¨ackstr¨om and McDonald (2011) develop a fine-grained annotation scheme that includes Snonrelevance as one of five categories. However, they do not use the category S-nonrelevance directly in their experiments and do not evaluate classification accuracy for it. We do not use their data set as it would cause domain mismatch between the product reviews they use and the available movie review subjectivity data (Pang and Lee, 2004) in the TL approach. Changing both the domain (movies to products) and the task (subjectivity to S-relevance) would give rise to interactions that we would like to avoid in our study. The notion of annotator rationales (Zaidan et al., 2007) has some overlap with our notion of sentiment relevance. Yessenalina et al. (2010) use rationales in a multi-level model to integrate sentence-level information into a document classifier. Neither paper presents a direct gold standard evaluation of the accuracy of rationale detection. In summary, no direct evaluation of sentiment relevance has been performed previously. One contribution in this paper is that we provide a single-domain gold standard for sentiment relevance, created based on clear annotation guidelines, and use it for direct evaluation. 
Sentiment relevance is also related to review mining (e.g., (Ding et al., 2008)) and sentiment retrieval techniques (e.g., (Eguchi and Lavrenko, 2006)) in that they aim to find phrases, sentences or snippets that are relevant for sentiment, either with respect to certain features or with a focus on high-precision retrieval (cf. (Liu, 2010)). However, finding a few S-relevant items with high precision is much easier than the task we address: exhaustive classification of all sentences. Another contribution is that we show that generalization based on semantic classes improves Srelevance classification. While previous work has shown the utility of other types of feature generalization for sentiment and subjectivity analysis (e.g., syntax and part-of-speech (Riloff and Wiebe, 2003)), semantic classes have so far not been exploited. Named-entity features in movie reviews were first used by Zhuang et al. (2006), in the form of feature-opinion pairs (e.g., a positive opinion about the acting). They show that recognizing plot elements (e.g., script) and classes of people (e.g., actor) benefits review summarization. We follow their approach by using IMDb to define named entity features. We extend their work by introducing methods for labeling partial uses of names and pronominal references. We address a different problem (S-relevance vs. opinions) and use different methods (graph-based and statistical vs. rulebased). T¨ackstr¨om and McDonald (2011) also solve a similar sequence problem by applying a distantly supervised classifier with an unsupervised hidden sequence component. Their setup differs from ours as our focus lies on pattern-based distant supervision instead of distant supervision using documents for sentence classification. Transfer learning has been applied previously in sentiment analysis (Tan and Cheng, 2009), targeting polarity detection. 4 Methods Due to the sequential properties of S-relevance (cf. Taboada et al. (2009)), we impose the discourse constraint that an S-relevant (resp. S-nonrelevant) sentence tends to follow an S-relevant (resp. Snonrelevant) sentence. Following Pang and Lee (2004), we use minimum cut (MinCut) to formalize this discourse constraint. For a document with n sentences, we create a graph with n + 2 nodes: n sentence nodes and source and sink nodes. We define source and sink to represent the classes S-relevance and Snonrelevance, respectively, and refer to them as SR and SNR. The individual weight ind(s, x) between a sentence s and the source/sink node x ∈{SR, SNR} is weighted according to some confidence measure for assigning it to the corresponding class. The weight on the edge from the document’s ith sentence si to its jth sentence sj is set to assoc(si, sj) = c/(j −i)2 where c is a parameter (cf. (Pang and Lee, 2004)). The minimum cut is a tradeoff between the confidence of the classification decisions and “discourse coherence”. The discourse constraint often has the effect that high-confidence labels are propagated over the se957 quence. As a result, outliers with low confidence are eliminated and we get a “smoother” label sequence. To compute minimum cuts, we use the pushrelabel maximum flow method (Cherkassky and Goldberg, 1995).3 We need to find values for multiple free parameters related to the sequence model. Supervised optimization is impossible as we do not have any labeled data. We therefore resort to a proxy measure, the run count. A run is a sequence of sentences with the same label. 
We need to find values for multiple free parameters related to the sequence model. Supervised optimization is impossible as we do not have any labeled data. We therefore resort to a proxy measure, the run count. A run is a sequence of sentences with the same label. We set each parameter p to the value that produces a median run count that is closest to the true median run count (or, in case of a tie, closest to the true mean run count). We assume that the optimal median/mean run count is known. In practice, it can be estimated from a small number of documents. We find the optimal value of p by grid search.

5 Features

Choosing features is crucial in situations where no high-quality training data is available. We are interested in features that are robust and support generalization. We propose two linguistic feature types for S-relevance classification that meet these requirements.

5.1 Generalization through Semantic Features

Distant supervision and transfer learning are settings where exact training data is unavailable. We therefore introduce generalization features which are more likely to support knowledge transfer. To generalize over concepts, we use knowledge from taxonomies. A set of generalizations can be induced by making a cut in the taxonomy and defining the concepts there as base classes. For nouns, the taxonomy is WordNet (Miller, 1995), for which CoreLex (Buitelaar, 1998) gives a set of basic types. For verbs, VerbNet (Kipper et al., 2008) already contains base classes. We add for each verb in VerbNet and for each noun in CoreLex its base class or basic type as an additional feature, where words tagged by the mate tagger (Bohnet, 2010) as NN.* are treated as nouns and words tagged as VB.* as verbs. For example, the verb suggest occurs in the VerbNet base class say, so we add a feature VN:say to the feature representation. We refer to these feature sets as CoreLex (CX) and VerbNet (VN) features and to their combination as semantic features (SEM).

5.2 Named Entities

As standard named entity recognition (NER) systems do not capture categories that are relevant to the movie domain, we opt for a lexicon-based approach similar to (Zhuang et al., 2006). We use the IMDb movie metadata database (www.imdb.com/interfaces/), from which we extract names for the categories <ACTOR>, <PERSONNEL> (directors, screenwriters, and composers), and <CHARACTER> (movie characters). Many entries are unsuitable for NER; e.g., dog is frequently listed as a character. We filter out all words that also appear in lower case in a list of English words extracted from the dict.cc dictionary. A name n can be ambiguous between the categories (e.g., John Williams). We disambiguate by calculating the maximum likelihood estimate of p(c|n) = f(n, c) / Σ_{c'} f(n, c'), where c is one of the three categories and f(n, c) is the number of times n occurs in the database as a member of category c. We also calculate these probabilities for all tokens that make up a name. While this can cause false positives, it can help in many cases where the name obviously belongs to a category (e.g., Skywalker in Luke Skywalker is very likely a character reference). We always interpret a name preceding an actor in parentheses as a character mention, e.g., Reese Witherspoon in Tracy Flick (Reese Witherspoon) is an overachiever [...] This way, we can recognize character mentions for which IMDb provides insufficient information.
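The maximum likelihood disambiguation above is easy to make concrete; the following toy sketch (with invented counts, not the real IMDb frequencies) shows the intended computation.

```python
from collections import defaultdict

CATEGORIES = ('<ACTOR>', '<PERSONNEL>', '<CHARACTER>')

def category_distribution(freq):
    """freq: dict mapping (name, category) -> f(n, c), the number of times
    the name occurs in the database under that category."""
    totals = defaultdict(float)
    for (name, _), f in freq.items():
        totals[name] += f
    # p(c|n) = f(n, c) / sum_c' f(n, c')
    return {(name, cat): f / totals[name] for (name, cat), f in freq.items()}

def most_likely_category(name, probs):
    return max(CATEGORIES, key=lambda c: probs.get((name, c), 0.0))

# Toy example with made-up counts for an ambiguous name.
probs = category_distribution({('John Williams', '<PERSONNEL>'): 95,
                               ('John Williams', '<CHARACTER>'): 5})
assert most_likely_category('John Williams', probs) == '<PERSONNEL>'
```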
In addition, we use a set of simple rules to propagate annotations to related terms. If a capitalized word occurs, we check whether it is part of an already recognized named entity. For example, if we encounter Robin and we previously encountered Robin Hood, we assume that the two entities match. Personal pronouns are matched to the most recently encountered named entity. This rule has precedence over NER, so if a name matches a labeled entity, we do not attempt to label it through NER. The aforementioned features are encoded as binary presence indicators for each sentence. This feature set is referred to as named entities (NE).

5.3 Sequential Features

Following previous sequence classification work with Maximum Entropy models (e.g., (Ratnaparkhi, 1996)), we use selected features of adjacent sentences. If a sentence contains a feature F, we add the feature F+1 to the following sentence. For example, if a <CHARACTER> feature occurs in a sentence, <CHARACTER+1> is added to the following sentence. For S-relevance classification, we perform this operation only for NE features, as they are restricted to a few classes and thus will not enlarge the feature space notably. We refer to this feature set as sequential features (SQ).

6 Distant Supervision

Since a large labeled resource for sentiment relevance classification is not yet available, we investigate semi-supervised methods for creating sentiment relevance classifiers. In this section, we show how to bootstrap a sentiment relevance classifier by distant supervision (DS). Even though we do not have sentiment relevance annotations, there are sources of metadata about the movie domain that we can leverage for distant supervision. Specifically, movie databases like IMDb contain both metadata about the plot, in particular the characters of a movie, and metadata about the "creators" who were involved in the production of the movie: actors, writers, directors, and composers. On the one hand, statements about characters usually describe the plot and are not sentiment relevant; on the other hand, statements about the creators tend to be evaluations of their contributions – positive or negative – to the movie. We formulate a classification rule based on this observation: count occurrences of NE features and label sentences that contain a majority of creators (and tied cases) as SR and sentences that contain a majority of characters as SNR (a sketch of this rule follows below). This simple labeling rule covers 1583 sentences with an F1 score of 67.2% on the SR corpus. We call these labels, inferred from NE metadata, distant supervision (DS) labels. This is a form of distant supervision in that we use the IMDb database as described in Section 5 to automatically label sentences based on which metadata from the database they contain.

To increase coverage, we train a Maximum Entropy (MaxEnt) classifier (Manning and Klein, 2003) on the labels. The MaxEnt model achieves an F1 of 61.2% on the SR corpus (Table 3, line 2). As this classifier uses training data that is biased towards a specialized case (sentences containing the named entity types creators and characters), it does not generalize well to other S-relevance problems and thus yields lower performance on the full dataset.

This distant supervision setup suffers from two issues. First, the classifier only sees a subset of examples that contain named entities, making generalization to other types of expressions difficult. Second, there is no way to control the quality of the input to the classifier, as we have no confidence measure for our distant supervision labeling rule. We will address these two issues by introducing an intermediate step, the unsupervised sequence model introduced in Section 4.
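A minimal sketch of this labeling rule, as we read it from the description above (feature names mirror the NE categories; ties between creators and characters go to SR):

```python
def ds_label(ne_features):
    """ne_features: list of NE feature occurrences for one sentence,
    e.g. ['<ACTOR>', '<CHARACTER>', '<CHARACTER>'].
    Returns 'SR', 'SNR', or None if the rule does not apply."""
    creators = sum(f in ('<ACTOR>', '<PERSONNEL>') for f in ne_features)
    characters = sum(f == '<CHARACTER>' for f in ne_features)
    if creators == 0 and characters == 0:
        return None
    # majority of creators (and tied cases) -> S-relevant,
    # majority of characters -> S-nonrelevant
    return 'SR' if creators >= characters else 'SNR'
```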
As described in Section 4, each document is represented as a graph of sentences, and the weights between sentences and the source/sink nodes representing SR/SNR are set to the confidence values obtained from the distantly trained MaxEnt classifier. We then apply MinCut as described in the following paragraphs and select the most confident examples as training material for a new classifier.

6.1 MinCut Setup

We follow the general MinCut setup described in Section 4. As explained above, we assume that creators and directors indicate relevance and characters indicate nonrelevance. Accordingly, we define n_SR to be the number of <ACTOR> and <PERSONNEL> features occurring in a sentence, and n_SNR the number of <CHARACTER> features. We then set the individual weight between a sentence and the source/sink nodes to ind(s, x) = n_x, where x ∈ {SR, SNR}. The MinCut parameter c is set to 1; we wish to give the association scores high weights as there might be long spans that have individual weights with zero values.

6.2 Confidence-based Data Selection

We use the output of the base classifier to train supervised models. Since the MinCut model is based on a weak assumption, it will make many false decisions. To eliminate incorrect decisions, we only use documents as training data that were labeled with high confidence. As the confidence measure for a document, we use the maximum flow value f – the "amount of fluid" flowing through the document. The max-flow min-cut theorem (Ford and Fulkerson, 1956) implies that if the flow value is low, then the cut was found more quickly and thus can be easier to calculate; this means that the sentences are more likely to have been assigned to the correct segment. Following this assumption, we train MaxEnt and Conditional Random Field (CRF, (McCallum, 2002)) classifiers on the k% of documents that have the lowest maximum flow values f, where k is a parameter which we optimize using the run count method introduced in Section 4.
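The selection step itself is straightforward; here is a sketch under our own assumptions about the document representation (the value of k is tuned elsewhere via the run count criterion).

```python
def select_confident_documents(labeled_docs, flow_values, k=0.5):
    """labeled_docs: MinCut-labeled documents; flow_values: their maximum
    flow values f; k: fraction of documents to keep. Returns the k% of
    documents with the lowest flow, i.e. the most confident labelings."""
    order = sorted(range(len(labeled_docs)), key=lambda i: flow_values[i])
    n_keep = max(1, int(round(k * len(labeled_docs))))
    return [labeled_docs[i] for i in order[:n_keep]]
```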
6.3 Experiments and Results

Table 3 shows S-relevant (F_SR), S-nonrelevant (F_SNR) and macro-average (F_m) F1 values for different setups with this parameter.

  Model                  Features    F_SR   F_SNR   F_m
1 Majority BL            –           88.3    0.0    44.2
2 MaxEnt (DS labels)     NE          79.8   42.6    61.2 [1]
3 DS labels + MinCut     NE          79.6   48.2    63.9 [1,2]
4 DS MaxEnt              NE          84.8   46.4    65.6 [1,2]
5 DS MaxEnt              NE+SEM      85.2   48.0    66.6 [1,2,4]
6 DS CRF                 NE          83.4   49.5    66.4 [1,2]
7 DS MaxEnt              NE+SQ       84.8   49.2    67.0 [1,2,3,4]
8 DS MaxEnt              NE+SQ+SEM   84.5   49.1    66.8 [1,2,3,4]

Table 3: Classification results: F_SR (S-relevant F1), F_SNR (S-nonrelevant F1), and F_m (macro-averaged F1). The bracketed line numbers indicate a significant improvement over the corresponding line.

We compare the following setups: (1) the majority baseline (BL), i.e., choosing the most frequent label (SR); (2) a MaxEnt baseline trained on DS labels without application of MinCut; (3) the base classifier using MinCut (DS labels + MinCut) as described above. Conditions 4–8 train supervised classifiers based on the labels from DS labels + MinCut: (4) MaxEnt with named entities (NE); (5) MaxEnt with NE and semantic (SEM) features; (6) CRF with NE; (7) MaxEnt with NE and sequential (SQ) features; (8) MaxEnt with NE, SQ, and SEM. We test statistical significance using the approximate randomization test (Noreen, 1989) on documents with 10,000 iterations at p < .05.

We achieve classification results above the baseline using the MinCut base classifier (line 3) and a considerable improvement through distant supervision. We found that all classifiers using DS labels and MinCut are significantly better than MaxEnt trained on purely rule-based DS labels (line 2). Also, the MaxEnt models using SQ features (lines 7, 8) are significantly better than the MinCut base classifier (line 3). For comparison to a chain-based sequence model, we train a CRF (line 6); however, the improvement over MaxEnt (line 4) is not significant. We found that both semantic (lines 5, 8) and sequential (lines 7, 8) features help to improve the classifier. The best model (line 7) performs better than MinCut (line 3) by 3.1% and better than training on purely rule-generated DS labels (line 2) by 5.8%. However, we did not find a cumulative effect (line 8) of the two feature sets.

Generally, the quality of NER is crucial in this task. While IMDb is in general a thoroughly compiled database, it is not perfect. For example, all main characters in Groundhog Day are listed with their first name only, even though the full names are given in the movie. Also, some entries are intentionally incomplete to avoid spoiling the plot. The data also contains ambiguities between characters and titles (e.g., Forrest Gump) that are impossible to resolve with our maximum likelihood method. In some types of movies, e.g., documentaries, the distinction between characters and actors makes little sense. Furthermore, ambiguities like occurrences of common names such as John are impossible to resolve if there is no earlier full referring expression (e.g., John Williams).

Feature analysis for the best model using DS labels (line 7) shows that NE features are dominant. This correlation is not surprising as the seed labels were induced based on NE features. Interestingly, some subjective features, e.g., horrible, have high weights for S-nonrelevance, as they are associated with non-relevant content such as plot descriptions.

To summarize, the results of our experiments using distant supervision show that a sentiment relevance classifier can be trained successfully by labeling data with a few simple feature rules, with MinCut-based input significantly outperforming the baseline. Named entity recognition, accomplished with data extracted from a domain-specific database, plays a significant role in creating an initial labeling.

7 Transfer Learning

To address the problem that we do not have enough labeled SR data, we now investigate a second semi-supervised method for SR classification, transfer learning (TL). We will use the P&L data (introduced in Section 2.2) for training. This data set has labels that are intended to be subjectivity labels. However, they were automatically created using heuristics, and the resulting labels can be viewed either as noisy SR labels or as noisy subjectivity labels. Compared to distant supervision, the key advantage of training on P&L is that the training set is much larger, containing around 7 times as much data. In TL, the key to success is to find a generalized feature representation that supports knowledge transfer. We use a semantic feature generalization method that relies on taxonomies to introduce such features. We again use MinCut to impose discourse constraints. This time, we first classify the data using a supervised classifier and then use MinCut to smooth the sequences. The baseline (BL) uses a simple bag-of-words representation of sentences for classification, which we then extend with semantic features.

7.1 MinCut Setup

We again implement the basic MinCut setup from Section 4.
We set the individual weight ind(s, x) on the edge between sentence s and class x to the estimate p(x|s) returned by the supervised classifier. The parameter c of the MinCut model is tuned using the run count method described in Section 4.

7.2 Experiments and Results

As we would expect, the baseline performance of the supervised classifier on SR is low: 69.9% (Table 4, line 1). MinCut significantly boosts the performance by 7.9% to 77.5% (line 1), a result similar to (Pang and Lee, 2004). Adding semantic features improves supervised classification significantly by 5.7% (75.6% on line 4). When MinCut and both types of semantic features are used together, these improvements are partially cumulative: an improvement over the baseline by 12.6% to 82.5% (line 4).

  Model     base classifier           MinCut
            F_SR   F_SNR   F_m        F_SR   F_SNR   F_m
1 BL        81.1   58.6    69.9       87.2   67.8    77.5 [B]
2 CX        82.9   60.1    71.5 [B]   89.0   70.3    79.7 [B,M]
3 VN        85.6   62.1    73.9 [B]   91.4   73.6    82.5 [B,M]
4 CX+VN     88.3   62.9    75.6 [B]   92.7   72.2    82.5 [B,M]

Table 4: Classification results: F_SR (S-relevant F1), F_SNR (S-nonrelevant F1), and F_m (macro-averaged F1). B indicates a significant improvement over the BL base classifier (69.9), M over BL MinCut (77.5).

We also experiment with a training set into which an artificial class imbalance is introduced, matching the 80:20 imbalance of SR:SNR in the S-relevance corpus. After applying MinCut, we find that the results for BL with and without imbalance do not differ significantly. However, models using CX and VN features with imbalance are actually significantly inferior to the respective balanced versions. This result suggests that MinCut is more effective at coping with class imbalances than artificial balancing.

MinCut and semantic features are successful for TL because both impose constraints that are more useful in a setup where noise is a major problem. MinCut can exploit test set information without supervision as the MinCut graph is built directly on each test set review. If high-confidence information is "seeded" within a document and then spread to neighbors, mistakes with low confidence are corrected. This way, MinCut also leads to a compensation of different class imbalances. The results are evidence that semantic features are robust to the differences between subjectivity and S-relevance (cf. Section 2). In the CX+VN model, meaningful feature classes receive high weights, e.g., the human class from CoreLex, which contains professions that are frequently associated with non-relevant plot descriptions.

To illustrate the run-based parameter optimization criterion, we show F1 and median/mean run lengths for different values of c for the best TL setting (line 4) in Figure 2.

Figure 2: F1 measure for different values of c. Horizontal line: optimal median run count. Circle: selected point.

Due to differences in the base classifier, the optimum of c may vary between the experiments. A weaker base classifier may yield a higher weight on the sequence model, resulting in a larger c. The circled point shows the data point selected through optimization. The optimization criterion does not always correlate perfectly with F1. However, we find no statistically significant difference between the selected result and the highest F1 value. These experiments demonstrate that S-relevance classification improves considerably through TL if semantic feature generalization and unsupervised sequence classification through MinCut are applied.
8 Conclusion

A number of different notions, including subjectivity, have been proposed for distinguishing parts of documents that convey sentiment from those that do not. We introduced sentiment relevance to make this distinction and argued that it better reflects the requirements of sentiment analysis systems. Our experiments demonstrated that sentiment relevance and subjectivity are related, but different. To enable other researchers to use this new notion of S-relevance, we have published the annotated S-relevance corpus used in this paper. Since a large labeled sentiment relevance resource does not yet exist, we investigated semi-supervised approaches to S-relevance classification that do not require S-relevance-labeled data. We showed that a combination of different techniques gives us the best results: semantic generalization features, imposing discourse constraints implemented as the minimum cut graph-theoretic method, automatic "distant" labeling based on a domain-specific metadata database, and transfer learning to exploit existing labels for a related classification problem. In future work, we plan to use sentiment relevance in a downstream task such as review summarization.

Acknowledgments

This work was funded by the DFG through the Sonderforschungsbereich 732. We thank Charles Jochim, Wiltrud Kessler, and Khalid Al Khatib for many helpful comments and discussions.

References

Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 89–97, Beijing, China, August. Coling 2010 Organizing Committee.
P. Buitelaar. 1998. CoreLex: systematic polysemy and underspecification. Ph.D. thesis, Brandeis University.
B. Cherkassky and A. Goldberg. 1995. On implementing push-relabel method for the maximum flow problem. Integer Programming and Combinatorial Optimization, pages 157–171.
X. Ding, B. Liu, and P. S. Yu. 2008. A holistic lexicon-based approach to opinion mining. In WSDM 2008, pages 231–240.
K. Eguchi and V. Lavrenko. 2006. Sentiment retrieval using generative models. In EMNLP 2006, pages 345–354.
L.R. Ford and D.R. Fulkerson. 1956. Maximal flow through a network. Canadian Journal of Mathematics, 8(3):399–404.
K. Kipper, A. Korhonen, N. Ryant, and M. Palmer. 2008. A large-scale classification of English verbs. Language Resources and Evaluation, 42(1):21–40.
B. Liu. 2010. Sentiment analysis and subjectivity. In Handbook of Natural Language Processing.
C. Manning and D. Klein. 2003. Optimization, maxent models, and conditional estimation without magic. In NAACL-HLT 2003: Tutorials, page 8.
A.K. McCallum. 2002. MALLET: A machine learning for language toolkit.
G.A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39–41.
E.W. Noreen. 1989. Computer Intensive Methods for Hypothesis Testing: An Introduction. Wiley.
B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In ACL 2004, pages 271–278.
B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In ACL-EMNLP 2002, pages 79–86.
A.M. Popescu and O. Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 339–346. Association for Computational Linguistics.
A. Ratnaparkhi. 1996.
A maximum entropy model for part-of-speech tagging. In Proceedings of the conference on empirical methods in natural language processing, volume 1, pages 133–142.
E. Riloff and J. Wiebe. 2003. Learning extraction patterns for subjective expressions. In EMNLP 2003, pages 105–112.
M. Taboada, J. Brooke, and M. Stede. 2009. Genre-based paragraph classification for sentiment analysis. In SIGdial 2009, pages 62–70.
O. Täckström and R. McDonald. 2011. Discovering fine-grained sentiment with latent variable structured prediction models. In ECIR 2011, pages 368–374.
S. Tan and X. Cheng. 2009. Improving SCL model for sentiment-transfer learning. In ACL 2009, pages 181–184.
S. Thrun. 1996. Is learning the n-th thing any easier than learning the first? In NIPS 1996, pages 640–646.
T. Wilson and J. Wiebe. 2003. Annotating opinions in the world press. In 4th SIGdial Workshop on Discourse and Dialogue, pages 13–22.
A. Yessenalina, Y. Yue, and C. Cardie. 2010. Multi-level structured models for document-level sentiment classification. In EMNLP 2010, pages 1046–1056.
O. Zaidan, J. Eisner, and C. Piatko. 2007. Using annotator rationales to improve machine learning for text categorization. In NAACL-HLT 2007, pages 260–267.
L. Zhuang, F. Jing, and X. Zhu. 2006. Movie review mining and summarization. In CIKM 2006, pages 43–50.
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 964–972, Sofia, Bulgaria, August 4-9 2013. ©2013 Association for Computational Linguistics

Predicting and Eliciting Addressee's Emotion in Online Dialogue

Takayuki Hasegawa (GREE Inc., Minato-ku, Tokyo 106-6101, Japan; this work was conducted while the first author was a graduate student at the University of Tokyo) [email protected]
Nobuhiro Kaji and Naoki Yoshinaga (Institute of Industrial Science, the University of Tokyo, Meguro-ku, Tokyo 153-8505, Japan) {kaji,ynaga}@tkl.iis.u-tokyo.ac.jp
Masashi Toyoda (Institute of Industrial Science, the University of Tokyo, Meguro-ku, Tokyo 153-8505, Japan) [email protected]

Abstract

While there have been many attempts to estimate the emotion of an addresser from her/his utterance, few studies have explored how her/his utterance affects the emotion of the addressee. This has motivated us to investigate two novel tasks: predicting the emotion of the addressee and generating a response that elicits a specific emotion in the addressee's mind. We target Japanese Twitter posts as a source of dialogue data and automatically build training data for learning the predictors and generators. The feasibility of our approaches is assessed by using 1099 utterance-response pairs that are built by five human workers.

1 Introduction

When we have a conversation, we usually care about the emotion of the person to whom we speak. For example, we try to cheer her/him up if we find out s/he feels down, or we avoid saying things that would trouble her/him. To date, the modeling of emotion in a dialogue has been studied extensively in NLP as well as in related areas (Forbes-Riley and Litman, 2004; Ayadi et al., 2011). However, the past attempts are virtually restricted to estimating the emotion of an addresser from her/his utterance (we use the terms addresser/addressee rather than speaker/listener, because we target not spoken but online dialogue). In contrast, few studies have explored how the emotion of the addressee is affected by the utterance. We consider the insufficiency of such research to be fatal for computers to support human-human communication or to provide a communicative man-machine interface.

Figure 1: Two example pairs of utterances and responses. The responses elicit certain emotions, JOY or SADNESS, in the addressee's mind. The addressee here refers to the user who receives the response. (In the figure, the utterance "I have had a high fever for 3 days." is answered either by "I hope you feel better soon.", eliciting JOY, or by "Sorry, but you can't join us today.", eliciting SADNESS.)

With this motivation in mind, the paper investigates two novel tasks: (1) prediction of the addressee's emotion and (2) generation of a response that elicits a prespecified emotion in the addressee's mind. We adopt Plutchik (1980)'s eight emotional categories in both tasks. In the prediction task, the system is provided with a dialogue history. For simplicity, we consider, as a history, an utterance and a response to it (Figure 1). Given the history, the system predicts the addressee's emotion that will be caused by the response. For example, the system outputs JOY when the response is I hope you feel better soon, while it outputs SADNESS when the response is Sorry, but you can't join us today (Figure 1). In the generation task, on the other hand, the system is provided with an utterance and an emotional category such as JOY or SADNESS, which is referred to as the goal emotion.
Then the system generates a response that elicits the goal emotion in the addressee's mind. For example, I hope you feel better soon is generated as a response to I have had a high fever for 3 days when the goal emotion is specified as JOY, while Sorry, but you can't join us today is generated for SADNESS (Figure 1).

Systems that can perform the two tasks not only serve as crucial components of dialogue systems but also have interesting applications of their own. Predicting the emotion of an addressee is useful for filtering flames or infelicitous expressions from online messages (Spertus, 1997). A response generator that is aware of the emotion of an addressee is also useful for text completion in online conversation (Hasselgren et al., 2003; Pang and Ravi, 2012).

This paper explores a data-driven approach to performing the two tasks. With the recent emergence of social media, especially microblogs, the amount of dialogue data available is rapidly increasing. We therefore take this opportunity to build large-scale training data from microblog posts automatically. This approach allows us to perform the two tasks at a large scale with little human effort.

We employ standard classifiers for predicting the emotion of an addressee. Our contribution here is to investigate the effectiveness of new features that cannot be used in ordinary emotion recognition, the task of estimating the emotion of a speaker (or writer) from her/his utterance (or writing) (Ayadi et al., 2011; Bandyopadhyay and Okumura, 2011; Balahur et al., 2011; Balahur et al., 2012). We specifically extract features from the addressee's last utterance (e.g., I have had a high fever for 3 days in Figure 1) and explore the effectiveness of using such features. Such information is characteristic of a dialogue situation.

To perform the generation task, we build a statistical response generator following (Ritter et al., 2011). To improve on the previous study, we investigate a method for controlling the contents of the response for, in our case, eliciting the goal emotion. We achieve this by using a technique inspired by domain adaptation. We learn multiple models, each of which is adapted to eliciting one specific emotion. We also perform model interpolation to address data sparseness.

In our experiment, we automatically build training data consisting of over 640 million dialogues from Japanese Twitter posts. Using this data set, we train the classifiers that predict the emotion of an addressee, and the response generators that elicit the goal emotion. We evaluate our methods on test data built by five human workers, and confirm the feasibility of the proposed approaches.

2 Emotion-tagged Dialogue Corpus

The key to making a supervised approach to predicting and eliciting the addressee's emotion successful is to obtain large-scale, reliable training data efficiently. We thus automatically build a large-scale emotion-tagged dialogue corpus from microblog posts, and use it as the training data in the prediction and generation tasks. This section describes a method for constructing the emotion-tagged dialogue corpus. We first describe how to extract dialogues from posts on Twitter, a popular microblogging service. We then explain how to automatically annotate utterances in the extracted dialogues with the addressers' emotions, using emotional expressions as clues.
2.1 Mining dialogues from Twitter

We first crawled utterances (posts) from Twitter using the Twitter REST API (https://dev.twitter.com/docs/api/). The crawled data consist of 5.5 billion utterances in Japanese tweeted by 770 thousand users from March 2011 to December 2012. We next cleaned up the crawled utterances by handling Twitter-specific expressions: we replaced all URL strings with 'URL', excluded utterances with the symbols that indicate the re-posting (RT) or quoting (QT) of others' tweets, and erased @user_name mentions appearing at the head and tail of the utterances, since they are usually added to make a reply. We excluded utterances given by any user whose name included 'bot'. We then extracted dialogues from the resulting utterances, assuming that a series of utterances interchangeably made by two users forms a dialogue. We here exploited the 'in_reply_to_status_id' field of each utterance provided by the Twitter REST API to link it to the other utterance, if any, to which it replied.

  # users                        672,937
  # dialogues                    311,541,839
  # unique utterances            1,007,403,858
  ave. # dialogues / user        463.0
  ave. # utterances / user       1497.0
  ave. # utterances / dialogue   3.2

Table 1: Statistics of dialogues extracted from Twitter.

Figure 2: The number of dialogues plotted against the dialogue length (the number of utterances in a dialogue).

  A: Would you like to go for dinner with me?
  B: Sorry, I can't. I have a fever of 38 degrees.
  A: Oh dear. I hope you feel better soon.        SURPRISE
  B: Thanks. I'm happy to hear you say that.      JOY

Table 2: An illustration of an emotion-tagged dialogue. The first column shows a dialogue (a series of utterances interchangeably made by two users), while the second column shows the addresser's emotion estimated from the utterance.

Table 1 lists the statistics of the extracted dialogues, while Figure 2 plots the number of dialogues against the dialogue length. Most dialogues (98.2%) consist of at most 10 utterances, although the longest dialogue includes 1745 utterances and spans more than six weeks.
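The following is a rough sketch of this preprocessing and dialogue threading; the regular expressions and the tweet dictionary schema are our own reading of the description above, not the authors' actual pipeline.

```python
import re

def clean_utterance(text):
    """Returns the cleaned text, or None if the post should be discarded."""
    if re.search(r'\b(RT|QT)\b', text):          # re-posts / quotes
        return None
    text = re.sub(r'https?://\S+', 'URL', text)  # normalize URLs
    text = re.sub(r'^(\s*@\w+)+', '', text)      # @user_name at the head
    text = re.sub(r'(\s*@\w+)+\s*$', '', text)   # @user_name at the tail
    return text.strip()

def reply_chain(tweet_id, tweets):
    """tweets: dict id -> {'user': ..., 'text': ..., 'in_reply_to': ...}.
    Walks in_reply_to_status_id links back to the start of the thread."""
    chain, cur = [], tweet_id
    while cur in tweets:
        chain.append(tweets[cur])
        cur = tweets[cur].get('in_reply_to')
    chain.reverse()
    # A dialogue additionally requires exactly two users taking turns;
    # that filtering (and deduplication of overlapping chains) is omitted here.
    return chain
```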
2.2 Tagging utterances with addressers' emotions

We then automatically labeled utterances in the obtained dialogues with the addressers' emotions, using emotional expressions as clues (Table 2). In this study, we have adopted Plutchik (1980)'s eight emotional categories (ANGER, ANTICIPATION, DISGUST, FEAR, JOY, SADNESS, SURPRISE, and TRUST) as the targets to label, and manually tailored around ten emotional expressions for each emotional category. Table 3 lists examples of the emotional expressions; the rest are mostly their spelling variations. (The clue expressions are language-specific but can easily be tailored for other languages; the Japanese expressions are translated into English here to widen the potential readership of the paper.)

  ANGER         frustrating, irritating, nonsense
  ANTICIPATION  exciting, expecting, looking forward
  DISGUST       disgusting, unpleasant, hate
  FEAR          afraid, anxious, scary
  JOY           glad, happy, delighted
  SADNESS       sad, lonely, unhappy
  SURPRISE      surprised, oh dear, wow
  TRUST         relieved, reliable, solid

Table 3: Examples of clue emotional expressions.

Because precise annotation is critical in the supervised learning scenario, we annotate utterances with the addressers' emotions only when the emotional expressions do not: 1. modify content words, or 2. accompany an expression of negation, conditional, imperative, interrogative, concession, or indirect speech in the same sentence. For example, I saw a frustrated teacher is rejected by the first condition, while I'll be happy if it rains is rejected by the second condition. The second condition was judged by checking whether the sentence includes trigger expressions such as 'ない' (not/never), 'たら' (if-clause), '?', 'けど' ((al)though), and 'と' (that-clause).

  Emotion        # utterances   Precision (Worker A)   Precision (Worker B)
  ANGER             190,555          0.95                   0.95
  ANTICIPATION    2,548,706          0.99                   0.99
  DISGUST           475,711          0.93                   0.93
  FEAR            2,671,222          0.96                   0.96
  JOY             2,725,235          0.94                   0.96
  SADNESS           712,273          0.97                   0.97
  SURPRISE          975,433          0.97                   0.97
  TRUST             359,482          0.97                   0.98

Table 4: Size and precision of utterances labeled with the addressers' emotions.

Table 4 lists the size and precision of the utterances labeled with the addressers' emotions. Two human workers measured the precision of the annotation by examining 100 labeled utterances randomly sampled for each emotional category. The inter-rater agreement was κ = 0.85, indicating almost perfect agreement. The precision of the annotation exceeded 0.95 for most of the emotional categories.

3 Predicting Addressee's Emotion

This section describes a method for predicting the emotion elicited in an addressee when s/he receives a response to her/his utterance. The input to this task is a pair of an utterance and a response to it, e.g., the two utterances in Figure 1, while the output is the addressee's emotion, one of the emotional categories of Plutchik (1980) (JOY and SADNESS for the top and bottom dialogues in Figure 1, respectively). Although a response could elicit multiple emotions in the addressee, in this paper we focus on predicting the most salient emotion elicited in the addressee and cast the prediction as a single-label multi-class classification problem. (Because microblog posts are short, we expect the emotions elicited by a response post not to be very diverse, so that a multi-class classification can capture the essential crux of the prediction task.) We then construct a one-versus-the-rest classifier by combining eight binary classifiers, each of which predicts whether the response elicits one emotional category. (Note that a one-versus-the-rest classifier could also be used in a multi-label classification scenario, simply by allowing it to output more than one emotional category (Ghamrawi and McCallum, 2005).) We use the online passive-aggressive algorithm to train the eight binary classifiers.

We exploit the emotion-tagged dialogue corpus constructed in Section 2 to collect training examples for the prediction task, as sketched below. For each emotion-tagged utterance in the corpus, we assume that the tagged emotion is elicited by the (last) response. We thereby extract the pair of utterances preceding the emotion-tagged utterance, together with the tagged emotion, as one training example. Taking the dialogue in Table 2 as an example, we obtain one training example from the first two utterances and SURPRISE as the emotion elicited in user A.
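A minimal sketch of this training-data extraction, under our own assumption about how a tagged dialogue is represented:

```python
def collect_training_examples(dialogues):
    """dialogues: list of dialogues, each a list of (text, emotion) pairs in
    temporal order, where emotion is one of the eight categories or None."""
    examples = []
    for dialogue in dialogues:
        for i in range(2, len(dialogue)):
            _, emotion = dialogue[i]
            if emotion is None:
                continue
            utterance, _ = dialogue[i - 2]   # the addressee's own utterance
            response, _ = dialogue[i - 1]    # the partner's response to it
            examples.append(((utterance, response), emotion))
    return examples
```

Applied to the dialogue in Table 2, this yields ((A's first utterance, B's reply), SURPRISE) and ((B's reply, A's third utterance), JOY).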
We extract all the word n-grams (n ≤ 3) in the response to induce (binary) n-gram features. The extracted n-grams can indicate a certain action that elicits a specific emotion (e.g., 'have a fever' in Table 2), or a style or tone of speaking (e.g., 'Sorry'). Likewise, we extract word n-grams from the addressee's utterance. The extracted n-grams activate another set of binary n-gram features. Because word n-grams themselves are likely to be sparse, we also estimate the addressers' emotions from their utterances and exploit them to induce emotion features. The addresser's emotion has been reported to influence the addressee's emotion strongly (Kim et al., 2012), while the addressee's emotion just before receiving a response can be a reference for predicting her/his emotion in question after receiving the response.

To induce emotion features, we exploit the rule-based approach used in Section 2.2 to estimate the addresser's emotion. Since the rule-based approach annotates utterances with emotions only when they contain emotional expressions, we independently train for each emotional category a binary classifier that estimates the addresser's emotion from her/his utterance and apply it to the unlabeled utterances. The training data for these classifiers are the emotion-tagged utterances obtained in Section 2, while the features are n-grams (n ≤ 3) in the utterance (excluding n-grams that match the emotional expressions used in Section 2, to avoid overfitting).

We should emphasize that the features induced from the addressee's utterance are unique to this task and are hardly available in the related tasks that predicted the emotion of a reader of news articles (Lin and Hsin-Yihn, 2008) or personal stories (Socher et al., 2011). We will later confirm the impact of these features on the prediction accuracy in the experiments.

4 Eliciting Addressee's Emotion

This section presents a method for generating a response that elicits the goal emotion, which is one of the emotional categories of Plutchik (1980), in the addressee. In Section 4.1, we describe a statistical framework for response generation proposed by (Ritter et al., 2011). In Section 4.2, we present how to adapt the model in order to generate a response that elicits the goal emotion in the addressee.

4.1 Statistical response generation

Following (Ritter et al., 2011), we apply the statistical machine translation model to generating a response to a given utterance. In this framework, a response is viewed as a translation of the input utterance. Similar to ordinary machine translation systems, the model is learned from pairs of an utterance and a response, using off-the-shelf tools for machine translation. We use GIZA++ (http://code.google.com/p/giza-pp/) and SRILM (http://www.speech.sri.com/projects/srilm/) for learning the translation model and the 5-gram language model, respectively. As post-processing, some phrase pairs are filtered out from the translation table as follows. When GIZA++ is directly applied to dialogue data, it frequently finds paraphrase pairs, learning to parrot back the input (Ritter et al., 2011). To avoid using such pairs for response generation, a phrase pair is removed if one phrase is a substring of the other.

We use the Moses decoder (http://www.statmt.org/moses/) to search for the best response to a given utterance. Unlike machine translation, we do not use reordering models, because the positions of phrases are not considered to correlate strongly with the appropriateness of responses (Ritter et al., 2011). In addition, we do not use any discriminative training methods such as MERT for optimizing the feature weights (Och, 2003). They are set to the default values provided by Moses (Ritter et al., 2011).
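The substring-based filtering of the phrase table is simple enough to sketch directly; the entry format below is an assumption of ours, not the actual Moses phrase-table format.

```python
def filter_phrase_table(phrase_pairs):
    """phrase_pairs: iterable of (source_phrase, target_phrase, scores)
    entries read from the phrase table. Drops 'parroting' pairs in which
    one side is a substring of the other."""
    for src, tgt, scores in phrase_pairs:
        if src in tgt or tgt in src:
            continue
        yield (src, tgt, scores)
```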
4.2 Model adaptation

The above framework allows us to generate appropriate responses to arbitrary input utterances. On top of this framework, we have developed a response generator that elicits a specific emotion. We use the emotion-tagged dialogue corpus to learn eight translation models and language models, each of which is specialized in generating responses that elicit one of the eight emotions (Plutchik, 1980). Specifically, the models are learned from the utterances preceding those that are tagged with the emotional category. As an example, consider learning the models for eliciting SURPRISE from the dialogue in Table 2. In this case, the first two utterances are used to learn the translation model, while only the second utterance is used to learn the language model.

However, this simple approach is prone to suffer from the data sparseness problem. Because not all the utterances are tagged with an emotion in the emotion-tagged dialogue corpus, only a small fraction of utterances can be used for learning the adapted models. We perform model interpolation to address this problem. In addition to the adapted models described above, we also use a general model, which is learned from the entire corpus. The two models are then merged by weighted linear interpolation. Specifically, we use the tmcombine.py script provided by Moses for the interpolation of translation models (Sennrich, 2012). For all four features derived from the translation model (i.e., two phrase translation probabilities and two lexical weights), the weights of the adapted model are equally set to α (0 ≤ α ≤ 1.0). On the other hand, we use SRILM for the interpolation of language models. The weight of the adapted model is set to β (0 ≤ β ≤ 1.0).

The parameters α and β control the strength of the adapted models. Only the adapted models are used when α (or β) = 1.0, while the adapted models are not used at all when α (or β) = 0. When both α and β are specified as 0, the model becomes equivalent to the original one described in Section 4.1.
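Conceptually, the interpolation is the same whether it is applied to phrase-table features or to language-model probabilities; the paper uses tmcombine.py and SRILM for this, while the toy sketch below only illustrates the weighted combination itself.

```python
def interpolate(adapted, general, weight):
    """adapted, general: dicts mapping an event (a phrase pair, an n-gram,
    ...) to a probability; weight is alpha (translation model) or beta
    (language model), the share given to the adapted model."""
    events = set(adapted) | set(general)
    return {e: weight * adapted.get(e, 0.0)
               + (1.0 - weight) * general.get(e, 0.0)
            for e in events}
```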
5 Experiments

5.1 Test data

To evaluate the proposed method, we built, as test data, sets of an utterance paired with responses that elicit a certain goal emotion (Table 5). Note that they were used for evaluation in both of the two tasks. Each utterance in the test data has more than one response that elicits the same goal emotion, because the responses are used to compute the BLEU score (see Section 5.3). The data set was built in the following manner. We first asked five human workers to produce responses to 80 utterances (10 utterances for each goal emotion). Note that the 80 utterances do not overlap between workers and that each worker produced only one response to each utterance. To alleviate the burden on the workers, we actually provided each worker with utterances from the emotion-tagged corpus. We then asked each worker to select 80 utterances to which s/he thought s/he could easily respond. The selected utterances were removed from the corpus during training. As a result, we obtained 400 utterance-response pairs (= 80 utterance-response pairs × 5 workers).

For each of those 400 utterances, two additional responses were produced. We did not allow the same worker to produce more than one response to the same utterance. In this way, we obtained 1200 responses for the 400 utterances in total. Finally, we assessed the data quality to remove responses that were unlikely to elicit the goal emotion. For each utterance-response pair, we asked two workers to judge whether the response elicited the goal emotion. If both workers regarded the response as inappropriate, it was removed from the data. The resulting test data consist of 1099 utterance-response pairs for 396 utterances. This data set is submitted as supplementary material to support the reproducibility of our experimental results.

  Goal emotion: JOY
  U:  16 歳になりました,これからもよろしくお願いします! (I'm turning 16. Hope to get along with you as well as ever!)
  R1: 誕生日おめでとうございます! (Happy birthday!)
  R2: おめでとう!今度誕生日プレゼントあげるね. (Congratulations! I'll give you a birthday present.)
  R3: おめでとうー!!幸せな一年を! (Congratulations! I hope you have a happy year!)

Table 5: Example of the test data. English translations are given in parentheses.

  Emotion       # utterance pairs
  ANGER                119,881
  ANTICIPATION       1,416,847
  DISGUST              333,972
  FEAR               1,662,998
  JOY                1,724,198
  SADNESS              436,668
  SURPRISE             589,790
  TRUST                228,974
  GENERAL          646,429,405

Table 6: The number of utterance pairs used for training the classifiers in emotion prediction and for learning the translation models and language models in response generation.

5.2 Prediction task

We first report experimental results on predicting the addressee's emotion within a dialogue. Table 6 lists the number of utterance-response pairs used to train the eight binary classifiers for the individual emotional categories, which form the one-versus-the-rest classifier for the prediction task. We used opal (http://www.tkl.iis.u-tokyo.ac.jp/~ynaga/opal/) as an implementation of the online passive-aggressive algorithm to train the individual classifiers. To investigate the impact of the features that are uniquely available in dialogue data, we compared classifiers trained with the following two sets of features in terms of precision, recall, and F1 for each emotional category:

RESPONSE: the n-gram and emotion features induced from the response.
RESPONSE/UTTER.: the n-gram and emotion features induced from the response and the addressee's utterance.

  Emotion        RESPONSE               RESPONSE/UTTER.
                 PREC   REC    F1       PREC   REC    F1
  ANGER          0.455  0.476  0.465    0.600  0.548  0.573
  ANTICIPATION   0.518  0.526  0.522    0.614  0.637  0.625
  DISGUST        0.275  0.519  0.359    0.378  0.511  0.435
  FEAR           0.484  0.727  0.581    0.459  0.706  0.556
  JOY            0.690  0.417  0.519    0.720  0.590  0.649
  SADNESS        0.711  0.467  0.564    0.670  0.562  0.611
  SURPRISE       0.511  0.348  0.414    0.584  0.437  0.500
  TRUST          0.695  0.452  0.548    0.682  0.514  0.586
  average        0.542  0.492  0.497    0.588  0.563  0.567

Table 7: Predicting the addressee's emotion: results.

  Correct \ Predicted  ANGER  ANTICIP.  DISGUST  FEAR  JOY  SADNESS  SURPRISE  TRUST  total
  ANGER                   69        0       26     20    0        8         2      1    126
  ANTICIPATION             1       86       11      7   13        0         6     11    135
  DISGUST                 25        1       68     18    2        8         7      4    133
  FEAR                     3        0       22    101    1        5         9      2    143
  JOY                      1       28        9      4   85        1         7      9    144
  SADNESS                  6        3       25     14    5       77         5      2    137
  SURPRISE                 7       10        9     32    5        7        59      6    135
  TRUST                    3       12       10     24    7        9         6     75    146
  total                  115      140      180    220  118      115       101    110   1099

Table 8: Confusion matrix for predicting the addressee's emotion (rows: correct emotion, columns: predicted emotion).

Table 7 lists the prediction results. We can see that the features induced from the addressee's utterance significantly improved the prediction performance, F1, for emotions other than FEAR. FEAR is elicited instantly by the response, and the features induced from the addressee's utterance thereby confused the classifier.
Table 8 shows the confusion matrix of the classifier using all the features. We can find some typical confusion patterns in this matrix. The classifier confuses DISGUST with ANGER and vice versa, while it confuses JOY with ANTICIPATION. These confusions conform to our expectation, since they are actually similar emotions. The classifier was less likely to confuse positive emotions (JOY and ANTICIPATION) with negative emotions (ANGER, DISGUST, FEAR, and SADNESS) or vice versa.

  Goal emotion: ANGER (predicted as SADNESS)
  U: 毎日通話してるなんなの羨ましいわ (You have phone calls every day, I envy you.)
  R: 君の方こそ誰からも電話こないから暇で羨ましいよ。 (I envy you have a lot of time 'cause no one calls you.)

  Goal emotion: SURPRISE (predicted as FEAR)
  U: 黒髪がモテるってマジか。 (Is it true that dark-haired girls are popular with boys?)
  R: 80%くらいの男子は黒髪が好きらしい。 (About 80% of boys seem to prefer dark-haired girls.)

Table 9: Examples of utterance-response pairs for which the system predicted the wrong emotion.

We briefly examined the confusions and found two major types of errors, each of which is exemplified in Table 9. The first (top) one is sarcasm or irony, which has been reported to be difficult to capture with lexical features alone (González-Ibáñez et al., 2011). The other (bottom) one is due to lack of information: in this example, the addressee will be surprised by the response only if s/he does not already know the fact it provides.

5.3 Generation task

We next present the experimental results for eliciting the emotion of the addressee. We use the utterance pairs summarized in Table 6 to learn the translation models and language models for eliciting each emotional category. We also use the 640 million utterance pairs in the entire emotion-tagged corpus for learning the general models. However, for learning the general translation models, we currently use 4 million utterance pairs sampled from the 640 million pairs due to computational limitations.

Automatic evaluation. We first use the BLEU score (Papineni et al., 2002) to perform automatic evaluation (Ritter et al., 2011). In this evaluation, the system is provided with the utterance and the goal emotion in the test data, and the generated responses are evaluated through BLEU. Specifically, we conducted two-fold cross-validation to optimize the weights of our method. We tried α and β in {0.0, 0.2, 0.4, 0.6, 0.8, 1.0} and selected the weights that achieved the best BLEU score. Note that we adopted different values of the weights for different emotional categories.

  System          BLEU
  NO ADAPTATION   0.64
  PROPOSED        1.05
  OPTIMAL         1.57

Table 10: Comparison of BLEU scores.

Table 10 compares the BLEU scores of three methods, including the proposed one. The first row represents a method that does not perform model adaptation at all; it corresponds to the special case (i.e., α = β = 0.0) of the proposed method. The second row represents our method, while the last row represents the result of our method when the weights are set as optimal, i.e., those achieving the best BLEU on the test data. This result can be considered an upper bound on the BLEU score. The results demonstrate that model adaptation is useful for generating responses that elicit the goal emotion: we clearly observe an improvement in BLEU from 0.64 to 1.05. On the other hand, there still remains a gap between the last two rows (i.e., proposed and optimal). We think this is partly because the current test data is too small to reliably tune the parameters.
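A sketch of this weight tuning, with a grid over α and β scored by corpus BLEU; the generate() callback and the fold representation are placeholders for the actual system, not part of the paper.

```python
from itertools import product
from nltk.translate.bleu_score import corpus_bleu

GRID = (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)

def tune_weights(folds, generate):
    """folds: iterable of held-out sets, each a list of
    (utterance, goal_emotion, reference_responses) triples;
    generate(utterance, emotion, alpha, beta) returns a response string."""
    best_score, best_weights = -1.0, None
    for alpha, beta in product(GRID, GRID):
        scores = []
        for test in folds:
            hyps = [generate(u, e, alpha, beta).split() for u, e, _ in test]
            refs = [[r.split() for r in rs] for _, _, rs in test]
            scores.append(corpus_bleu(refs, hyps))
        avg = sum(scores) / len(scores)
        if avg > best_score:
            best_score, best_weights = avg, (alpha, beta)
    return best_weights
```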
Human evaluation. We next asked two human workers to manually evaluate the generation results. In this evaluation, the baseline (NO ADAPTATION in Table 10) and the proposed method each generated a response for each of the 396 utterances in the test data. For the resulting 792 utterance-response pairs, the two workers manually assessed the appropriateness of the response. Each response was first judged as to whether it is grammatical and meaningful. If the response was regarded as such by either of the workers, it was further judged as to whether it elicits the goal emotion. To make the comparison fair, we did not reveal to the workers which system generated each response. In addition, the responses generated by the two systems were presented in a random order. As a result, 147 and 157 responses of the baseline and the proposed method, respectively, were regarded as appropriate, i.e., eliciting the goal emotion, by either of the workers; 74 and 92 responses were regarded as appropriate by both workers. These results suggest the effectiveness of the proposed method. In particular, they confirm that the proposed method can generate responses that elicit the addressee's emotion more clearly. We also investigated the agreement between the two workers in this evaluation and found a κ coefficient of 0.59, which indicates moderate agreement. This supports the reliability of our evaluation.

  Goal emotion: JOY
  Input: 写真コンクールで優秀賞とったーーーー!2 月7 日表彰式! (I wooooon the outstanding award at the photo competition! The ceremony is on Feb. 7!)
  NO ADAPTATION: 残念ながら1日目だけですのー (Sorry to say, only the first day.)
  PROPOSED: 金賞受賞おめでとう!!!おめでとうなのー (Congratulations on winning the gold prize!!! Congrats.)

  Goal emotion: TRUST
  Input: 自暴自棄になっているの? (Do you get desperate?)
  NO ADAPTATION: 楽しみ! (I'm looking forward to it!)
  PROPOSED: きっとまだ大丈夫 (Maybe still OK.)

  Goal emotion: ANTICIPATION
  Input: だよね!なんとかなるよね!ww (Huh! It's gonna be all right! lol) あ,わたしグッズ買わなきゃなのでその時間だけは取ってくれるとうれしい(´∀`) (I gotta buy the goods, so I'll be glad if you can take the time :-))
  NO ADAPTATION: 私はグッズ買ってないから不安ですね (Since I've not bought it, I feel worried.)
  PROPOSED: いいですね!私も買いますね!!! (Good! I'll buy it too!!!)

Table 11: Examples of the responses generated by the two systems, NO ADAPTATION and PROPOSED.

Examples. Table 11 illustrates examples of the responses generated by the no-adaptation baseline and the proposed method. In the first two examples, the proposed method successfully generates responses that elicit the goal emotions, JOY and TRUST. From these examples, we can see that the adapted models assign large probability to phrases such as congratulations or OK. In the last example, the system also succeeded in eliciting the goal emotion, ANTICIPATION. Here we can interpret that the speaker of the response (i.e., the system) feels anticipation, and consequently the emotion of the addressee is affected by the emotion of the speaker (i.e., the system). Interestingly, a similar phenomenon is also observed in real conversation (Kim et al., 2012).

6 Related Work

There has been a tremendous amount of work on predicting emotion from text or speech data (Ayadi et al., 2011; Bandyopadhyay and Okumura, 2011; Balahur et al., 2011; Balahur et al., 2012). Unlike our prediction task, most of it has focused exclusively on estimating the emotion of a speaker (or writer) from her/his utterance (or writing). Analogous to our prediction task, Lin and Hsin-Yihn (2008) and Socher et al.
(2011) investigated predicting the emotion of a reader from the text that s/he reads. Our work differs from theirs in that we focus on dialogue data, and we exploit features that are not available in their task settings, e.g., the addressee's previous utterance. Tokuhisa et al. (2008) proposed a method for extracting pairs of an event (e.g., It rained suddenly when I went to see the cherry blossoms) and an emotion elicited by it (e.g., SADNESS) from Web text. The extracted data are used for emotion classification. A similar technique would be useful for predicting the emotion of an addressee as well.

Response generation has a long research history (Weizenbaum, 1966), although it is only very recently that a fully statistical approach was introduced in this field (Ritter et al., 2011). At this moment, we are unaware of any statistical response generators that model the emotion of the user. Some researchers have explored generating jokes or humorous text (Dybala et al., 2010; Labtov and Lipson, 2012). Those attempts are similar to our work in that they also aim at eliciting a certain emotion in the addressee. They are, however, restricted to eliciting one specific emotion. The linear interpolation of translation and/or language models is a widely used technique for adapting machine translation systems to new domains (Sennrich, 2012). However, it has not previously been applied in the context of response generation.

7 Conclusion and Future Work

In this paper, we have explored predicting and eliciting the emotion of an addressee by using a large amount of dialogue data obtained from microblog posts. In this first attempt to model the emotion of an addressee in the field of NLP, we demonstrated that the response of the dialogue partner and the previous utterance of the addressee are useful for predicting the emotion. In the generation task, on the other hand, we showed that the model adaptation approach successfully generates responses that elicit the goal emotion. For future work, we want to use a longer dialogue history in both tasks. While we considered only two utterances as a history, a longer history would be helpful. We also plan to personalize the proposed methods, exploiting microblog posts made by users of a certain age, gender, occupation, or even character to perform model adaptation.

Acknowledgment

This work was supported by the FIRST program of JSPS. The authors thank the anonymous reviewers for their valuable comments. The authors also thank the student annotators for their hard work.

References

Moataz El Ayadi, Mohamed S. Kamel, and Fakhri Karray. 2011. Survey on speech emotion recognition: Features, classification schemes, and databases. Pattern Recognition, 44:572–587.
Alexandra Balahur, Ester Boldrini, Andres Montoyo, and Patricio Martinez-Barco, editors. 2011. Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis. Association for Computational Linguistics.
Alexandra Balahur, Andres Montoyo, Patricio Martinez Barco, and Ester Boldrini, editors. 2012. Proceedings of the 3rd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis. Association for Computational Linguistics.
Sivaji Bandyopadhyay and Manabu Okumura, editors. 2011. Proceedings of the Workshop on Sentiment Analysis where AI meets Psychology. Asian Federation of Natural Language Processing.
Pawel Dybala, Michal Ptaszynski, Jacek Maciejewski, Mizuki Takahashi, Rafal Rzepka, and Kenji Araki. 2010.
Multiagent system for joke generation: Humor and emotions combined in human-agent conversation. Journal of Ambient Intelligence and Smart Environments, 2(1):31–48. Kate Forbes-Riley and Diane J. Litman. 2004. Predicting emotion in spoken dialogue from multiple knowledge sources. In Proceedings of NAACL, pages 201–208. Nadia Ghamrawi and Andrew McCallum. 2005. Collective multi-label classification. In Proceedings of CIKM, pages 195–200. Roberto Gonz´alez-Ib´a˜nez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in twitter: a closer look. In Proceedings of ACL, pages 581–586. Jon Hasselgren, Erik Montnemery, Pierre Nugues, and Markus Svensson. 2003. HMS: A predictive text entry method using bigrams. In Proceedings of EACL Workshop on Language Modeling for Text Entry Methods, pages 43–50. Suin Kim, JinYeong Bak, and Alice Haeyun Oh. 2012. Do you feel what I feel? social aspects of emotions in Twitter conversations. In Proceedings of ICWSM, pages 495–498. Igor Labtov and Hod Lipson. 2012. Humor as circuits in semantic networks. In Proceedings of ACL (Short Papers), pages 150–155. Kevin Lin and Hsin-Hsi Hsin-Yihn. 2008. Ranking reader emotions using pairwise loss minimization and emotional distribution regression. In Proceedings of EMNLP, pages 136–144. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160–167. Bo Pang and Sujith Ravi. 2012. Revisiting the predictability of language: Response completion in social media. In Proceedings of EMNLP, pages 1489– 1499. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318. Robert Plutchik. 1980. A general psychoevolutionary theory of emotion. In Emotion: Theory, research, and experience: Vol. 1. Theories of emotion, pages 3–33. New York: Academic. Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proceedings of EMNLP, pages 583–593. Rico Sennrich. 2012. Perplexity minimization for translation model domain adaptation in statistical machine translation. In Proceedings of EACL, pages 539–549. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP, pages 151–161. Ellen Spertus. 1997. Smokey: Automatic recognition of hostile messages. In Proceedings of IAAI, pages 1058–1065. Ryoko Tokuhisa, Kentaro Inui, and Yuji Matsumoto. 2008. Emotion classification using massive examples extracted from the Web. In Proceedings of COLING, pages 881–888. Joseph Weizenbaum. 1966. ELIZA — a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45. 972
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 973–982, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Utterance-Level Multimodal Sentiment Analysis Ver´onica P´erez-Rosas and Rada Mihalcea Computer Science and Engineering University of North Texas [email protected], [email protected] Louis-Philippe Morency Institute for Creative Technologies University of Southern California [email protected] Abstract During real-life interactions, people are naturally gesturing and modulating their voice to emphasize specific points or to express their emotions. With the recent growth of social websites such as YouTube, Facebook, and Amazon, video reviews are emerging as a new source of multimodal and natural opinions that has been left almost untapped by automatic opinion analysis techniques. This paper presents a method for multimodal sentiment classification, which can identify the sentiment expressed in utterance-level visual datastreams. Using a new multimodal dataset consisting of sentiment annotated utterances extracted from video reviews, we show that multimodal sentiment analysis can be effectively performed, and that the joint use of visual, acoustic, and linguistic modalities can lead to error rate reductions of up to 10.5% as compared to the best performing individual modality. 1 Introduction Video reviews represent a growing source of consumer information that gained increasing interest from companies, researchers, and consumers. Popular web platforms such as YouTube, Amazon, Facebook, and ExpoTV have reported a significant increase in the number of consumer reviews in video format over the past five years. Compared to traditional text reviews, video reviews provide a more natural experience as they allow the viewer to better sense the reviewer’s emotions, beliefs, and intentions through richer channels such as intonations, facial expressions, and body language. Much of the work to date on opinion analysis has focused on textual data, and a number of resources have been created including lexicons (Wiebe and Riloff, 2005; Esuli and Sebastiani, 2006) or large annotated datasets (Maas et al., 2011). Given the accelerated growth of other media on the Web and elsewhere, which includes massive collections of videos (e.g., YouTube, Vimeo, VideoLectures), images (e.g., Flickr, Picasa), audio clips (e.g., podcasts), the ability to address the identification of opinions in the presence of diverse modalities is becoming increasingly important. This has motivated researchers to start exploring multimodal clues for the detection of sentiment and emotions in video content (Morency et al., 2011; Wagner et al., 2011). In this paper, we explore the addition of speech and visual modalities to text analysis in order to identify the sentiment expressed in video reviews. Given the non homogeneous nature of full-video reviews, which typically include a mixture of positive, negative, and neutral statements, we decided to perform our experiments and analyses at the utterance level. This is in line with earlier work on text-based sentiment analysis, where it has been observed that full-document reviews often contain both positive and negative comments, which led to a number of methods addressing opinion analysis at sentence level. Our results show that relying on the joint use of linguistic, acoustic, and visual modalities allows us to better sense the sentiment being expressed as compared to the use of only one modality at a time. 
Another important aspect of this paper is the introduction of a new multimodal opinion database annotated at the utterance level which is, to our knowledge, the first of its kind. In our work, this dataset enabled a wide range of multimodal sentiment analysis experiments, addressing the relative importance of modalities and individual features. The following section presents related work in text-based sentiment analysis and audio-visual emotion recognition. Section 3 describes our new multimodal datasets with utterance-level sentiment annotations. Section 4 presents our multimodal sen973 timent analysis approach, including details about our linguistic, acoustic, and visual features. Our experiments and results on multimodal sentiment classification are presented in Section 5, with a detailed discussion and analysis in Section 6. 2 Related Work In this section we provide a brief overview of related work in text-based sentiment analysis, as well as audio-visual emotion analysis. 2.1 Text-based Subjectivity and Sentiment Analysis The techniques developed so far for subjectivity and sentiment analysis have focused primarily on the processing of text, and consist of either rulebased classifiers that make use of opinion lexicons, or data-driven methods that assume the availability of a large dataset annotated for polarity. These tools and resources have been already used in a large number of applications, including expressive textto-speech synthesis (Alm et al., 2005), tracking sentiment timelines in on-line forums and news (Balog et al., 2006), analysis of political debates (Carvalho et al., 2011), question answering (Oh et al., 2012), conversation summarization (Carenini et al., 2008), and citation sentiment detection (Athar and Teufel, 2012). One of the first lexicons used in sentiment analysis is the General Inquirer (Stone, 1968). Since then, many methods have been developed to automatically identify opinion words and their polarity (Hatzivassiloglou and McKeown, 1997; Turney, 2002; Hu and Liu, 2004; Taboada et al., 2011), as well as n-gram and more linguistically complex phrases (Yang and Cardie, 2012). For data-driven methods, one of the most widely used datasets is the MPQA corpus (Wiebe et al., 2005), which is a collection of news articles manually annotated for opinions. Other datasets are also available, including two polarity datasets consisting of movie reviews (Pang and Lee, 2004; Maas et al., 2011), and a collection of newspaper headlines annotated for polarity (Strapparava and Mihalcea, 2007). While difficult problems such as cross-domain (Blitzer et al., 2007; Li et al., 2012) or crosslanguage (Mihalcea et al., 2007; Wan, 2009; Meng et al., 2012) portability have been addressed, not much has been done in terms of extending the applicability of sentiment analysis to other modalities, such as speech or facial expressions. The only exceptions that we are aware of are the findings reported in (Somasundaran et al., 2006; Raaijmakers et al., 2008; Mairesse et al., 2012; Metze et al., 2009), where speech and text have been analyzed jointly for the purpose of subjectivity or sentiment identification, without, however, addressing other modalities such as visual cues; and the work reported in (Morency et al., 2011; Perez-Rosas et al., 2013), where multimodal cues have been used for the analysis of sentiment in product reviews, but where the analysis was done at the much coarser level of full videos rather than individual utterances as we do in our work. 2.2 Audio-Visual Emotion Analysis. 
Also related to our work is the research done on emotion analysis. Emotion analysis of speech signals aims to identify the emotional or physical states of a person by analyzing his or her voice (Ververidis and Kotropoulos, 2006). Proposed methods for emotion recognition from speech focus both on what is being said and how is being said, and rely mainly on the analysis of the speech signal by sampling the content at utterance or frame level (Bitouk et al., 2010). Several researchers used prosody (e.g., pitch, speaking rate, Mel frequency coefficients) for speech-based emotion recognition (Polzin and Waibel, 1996; Tato et al., 2002; Ayadi et al., 2011). There are also studies that analyzed the visual cues, such as facial expressions and body movements (Calder et al., 2001; Rosenblum et al., 1996; Essa and Pentland, 1997). Facial expressions are among the most powerful and natural means for human beings to communicate their emotions and intentions (Tian et al., 2001). Emotions can be also expressed unconsciously, through subtle movements of facial muscles such as smiling or eyebrow raising, often measured and described using the Facial Action Coding System (FACS) (Ekman et al., 2002). De Silva et. al. (De Silva et al., 1997) and Chen et. al. (Chen et al., 1998) presented one of the early works that integrate both acoustic and visual information for emotion recognition. In addition to work that considered individual modalities, there is also a growing body of work concerned with multimodal emotion analysis (Silva et al., 1997; Sebe et al., 2006; Zhihong et al., 2009; Wollmer et al., 2010). 974 Utterance transcription Label En este color, creo que era el color frambuesa. neu In this color, I think it was raspberry Pinta hermosisimo. pos It looks beautiful. Sinceramente, con respecto a lo que pinta y a que son hidratante, si son muy hidratantes. pos Honestly, talking about how they looks and hydrates, yes they are very hydrant. Pero el problema de estos labiales es que cuando uno se los aplica, te dejan un gusto asqueroso en la boca. neg But the problem with those lipsticks is that when you apply them, they leave a very nasty taste Sinceramente, es no es que sea el olor sino que es mas bien el gusto. neg Honestly, is not the smell, it is the taste. Table 1: Sample utterance-level annotations. The labels used are: pos(itive), neg(ative), neu(tral). More recently, two challenges have been organized focusing on the recognition of emotions using audio and visual cues (Schuller et al., 2011a; Schuller et al., 2011b), which included subchallenges on audio-only, video-only, and audiovideo, and drew the participation of many teams from around the world. Note however that most of the previous work on audio-visual emotion analysis has focused exclusively on the audio and video modalities, and did not consider textual features, as we do in our work. 3 MOUD: Multimodal Opinion Utterances Dataset For our experiments, we created a dataset of utterances (named MOUD) containing product opinions expressed in Spanish.1 We chose to work with Spanish because it is a widely used language, and it is the native language of the main author of this paper. We started by collecting a set of videos from the social media web site YouTube, using several keywords likely to lead to a product review or recommendation. 
Starting with the YouTube search page, videos were found using the following keywords: mis products favoritos (my favorite products), products que no recomiendo (non recommended products), mis perfumes favoritos (my favorite perfumes), peliculas recomendadas (recommended movies), peliculas que no recomiendo (non recommended movies) and libros recomendados (recommended books), libros que no recomiendo (non recommended books). Notice that the keywords are not targeted at a specific product type; rather, we used a variety of product names, so that the dataset has some degree of generality within the broad domain of product reviews. 1Publicly available from the authors webpage. Among all the videos returned by the YouTube search, we selected only videos that respected the following guidelines: the speaker should be in front of the camera; her face should be clearly visible, with a minimum amount of face occlusion during the recording; there should not be any background music or animation. The final video set includes 80 videos randomly selected from the videos retrieved from YouTube that also met the guidelines above. The dataset includes 15 male and 65 female speakers, with their age approximately ranging from 20 to 60 years. All the videos were first pre-processed to eliminate introductory titles and advertisements. Since the reviewers often switched topics when expressing their opinions, we manually selected a 30 seconds opinion segment from each video to avoid having multiple topics in a single review. 3.1 Segmentation and Transcription All the video clips were manually processed to transcribe the verbal statements and also to extract the start and end time of each utterance. Since the reviewers utter expressive sentences that are naturally segmented by speech pauses, we decided to use these pauses (>0.5seconds) to identify the beginning and the end of each utterance. The transcription and segmentation were performed using the Transcriber software. Each video was segmented into an average of six utterances, resulting in a final dataset of 498 utterances. Each utterance is linked to the corresponding audio and video stream, as well as its manual transcription. The utterances have an average duration of 5 seconds, with a standard deviation of 1.2 seconds. 975 Figure 1: Multimodal feature extraction 3.2 Sentiment Annotation To enable the use of this dataset for sentiment detection, we performed sentiment annotations at utterance level. Annotations were done using Elan,2 which is a widely used tool for the annotation of video and audio resources. Two annotators independently labeled each utterance as positive, negative, or neutral. The annotation was done after seeing the video corresponding to an utterance (along with the corresponding audio source). The transcription of the utterance was also made available. Thus, the annotation process included all three modalities: visual, acoustic, and linguistic. The annotators were allowed to watch the video segment and their corresponding transcription as many times as needed. The inter-annotator agreement was measured at 88%, with a Kappa of 0.81, which represents good agreement. All the disagreements were reconciled through discussions. Table 1 shows the five utterances obtained from a video in our dataset, along with their corresponding 2http://tla.mpi.nl/tools/tla-tools/elan/ sentiment annotations. As this example illustrates, a video can contain a mix of positive, negative, and neutral utterances. 
Note also that sentiment is not always explicit in the text: for example, the last utterance “Honestly, it is not the smell, it is the taste” has an implicit reference to the “nasty taste” expressed in the previous utterance, and thus it was also labeled as negative by both annotators. 4 Multimodal Sentiment Analysis The main advantage that comes with the analysis of video opinions, as compared to their textual counterparts, is the availability of visual and speech cues. In textual opinions, the only source of information consists of words and their dependencies, which may sometime prove insufficient to convey the exact sentiment of the user. Instead, video opinions naturally contain multiple modalities, consisting of visual, acoustic, and linguistic datastreams. We hypothesize that the simultaneous use of these three modalities will help create a better opinion analysis model. 976 4.1 Feature Extraction This section describes the process of automatically extracting linguistic, acoustic and visual features from the video reviews. First, we obtain the stream corresponding to each modality, followed by the extraction of a representative set of features for each modality, as described in the following subsections. These features are then used as cues to build a classifier of positive or negative sentiment. Figure 1 illustrates this process. 4.1.1 Linguistic Features We use a bag-of-words representation of the video transcriptions of each utterance to derive unigram counts, which are then used as linguistic features. First, we build a vocabulary consisting of all the words, including stopwords, occurring in the transcriptions of the training set. We then remove those words that have a frequency below 10 (value determined empirically on a small development set). The remaining words represent the unigram features, which are then associated with a value corresponding to the frequency of the unigram inside each utterance transcription. These simple weighted unigram features have been successfully used in the past to build sentiment classifiers on text, and in conjunction with Support Vector Machines (SVM) have been shown to lead to state-ofthe-art performance (Maas et al., 2011). 4.1.2 Acoustic Features Acoustic features are automatically extracted from the speech signal of each utterance. We used the open source software OpenEAR (Schuller, 2009) to automatically compute a set of acoustic features. We include prosody, energy, voicing probabilities, spectrum, and cepstral features. • Prosody features. These include intensity, loudness, and pitch that describe the speech signal in terms of amplitude and frequency. • Energy features. These features describe the human loudness perception. • Voice probabilities. These are probabilities that represent an estimate of the percentage of voiced and unvoiced energy in the speech. • Spectral features. The spectral features are based on the characteristics of the human ear, which uses a nonlinear frequency unit to simulate the human auditory system. These features describe the speech formants, which model spoken content and represent speaker characteristics. • Cepstral features. These features emphasize changes or periodicity in the spectrum features measured by frequencies; we model them using 12 Mel-frequency cepstral coefficients that are calculated based on the Fourier transform of a speech frame. Overall, we have a set of 28 acoustic features. During the feature extraction, we use a frame sampling of 25ms. 
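As a concrete illustration of the unigram features just described, the sketch below builds the vocabulary from the training transcriptions (keeping stopwords, dropping words that occur fewer than 10 times in the training set) and turns each utterance into a vector of unigram counts. This is a minimal sketch in Python; the function and variable names are illustrative and not taken from the paper.

```python
from collections import Counter

# Sketch of the linguistic features: vocabulary from training transcriptions
# (stopwords kept), words with corpus frequency below 10 removed, and each
# utterance represented by the per-utterance frequency of every remaining unigram.
MIN_FREQ = 10  # threshold determined empirically on a small development set

def build_vocabulary(train_transcriptions):
    corpus_counts = Counter()
    for text in train_transcriptions:
        corpus_counts.update(text.lower().split())
    return sorted(w for w, c in corpus_counts.items() if c >= MIN_FREQ)

def unigram_features(transcription, vocabulary):
    counts = Counter(transcription.lower().split())
    return [counts[w] for w in vocabulary]  # unigram frequency inside this utterance
```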
Speaker normalization is performed using z-standardization. The voice intensity is thresholded to identify samples with and without speech, with the same threshold being used for all the experiments and all the speakers. The features are averaged over all the frames in an utterance, to obtain one feature vector for each utterance. 4.1.3 Facial Features Facial expressions can provide important clues for affect recognition, which we use to complement the linguistic and acoustic features extracted from the speech stream. The most widely used system for measuring and describing facial behaviors is the Facial Action Coding System (FACS), which allows for the description of face muscle activities through the use of a set of Action Units (AUs). According with (Ekman, 1993), there are 64 AUs that involve the upper and lower face, including several face positions and movements.3 AUs can occur either by themselves or in combination, and can be used to identify a variety of emotions. While AUs are frequently annotated by certified human annotators, automatic tools are also available. In our work, we use the Computer Expression Recognition Toolbox (CERT) (Littlewort et al., 2011), which allows us to automatically extract the following visual features: • Smile and head pose estimates. The smile feature is an estimate for smiles. Head pose detection consists of three-dimensional estimates of the head orientation, i.e., yaw, pitch, and roll. These features provide information about changes in smiles and face positions while uttering positive and negative opinions. • Facial AUs. These features are the raw estimates for 30 facial AUs related to muscle movements for the eyes, eyebrows, nose, lips, 3http://www.cs.cmu.edu/afs/cs/project/face/www/facs.htm 977 and chin. They provide detailed information about facial behaviors from which we expect to find differences between positive and negative states. • Eight basic emotions. These are estimates for the following emotions: anger, contempt, disgust, fear, joy, sad, surprise, and neutral. These features describe the presence of two or more AUs that define a specific emotion. For example, the unit A12 describes the pulling of lip corners movement, which usually suggests a smile but when associated with a check raiser movement (unit A6), represents a marker for the emotion of happiness. We extract a total of 40 visual features, each of them obtained at frame level. Since only one person is present in each video clip, most of the time facing the camera, the facial tracking was successfully applied for most of our data. For the analysis, we use a sampling rate of 30 frames per second. The features extracted for each utterance are averaged over all the valid frames, which are automatically identified using the output of CERT.4 Segments with more than 60% of invalid frames are simply discarded. 5 Experiments and Results We run our sentiment classification experiments on the MOUD dataset introduced earlier. From the dataset, we remove utterances labeled as neutral, thus keeping only the positive and negative utterances with valid visual features. The removal of neutral utterances is done for two main reasons. First, the number of neutral utterances in the dataset is rather small. Second, previous work in subjectivity and sentiment analysis has demonstrated that a layered approach (where neutral statements are first separated from opinion statements followed by a separation between positive and negative statements) works better than a single three-way classification. 
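Before turning to the results, the following sketch illustrates how the frame-level measurements described in Sections 4.1.2 and 4.1.3 might be collapsed into one vector per utterance: per-speaker z-standardization of the acoustic frames, removal of frames the facial tracker marks as invalid, discarding of segments with more than 60% invalid frames, and averaging of what remains. The array layout and the boolean validity mask are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def zscore_per_speaker(frames):
    """frames: (n_frames, n_features) array for one speaker; z-standardize each feature."""
    mu, sigma = frames.mean(axis=0), frames.std(axis=0) + 1e-8
    return (frames - mu) / sigma

def utterance_vector(frames, valid_mask, max_invalid=0.6):
    """Average the valid frames of one utterance.

    frames: (n_frames, n_features) array; valid_mask: boolean array of shape (n_frames,).
    Returns None if more than 60% of the frames are invalid (segment discarded).
    """
    invalid_ratio = 1.0 - valid_mask.mean()
    if invalid_ratio > max_invalid:
        return None
    return frames[valid_mask].mean(axis=0)
```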
After this process, we are left with an experimental dataset of 412 utterances, 182 of which are labeled as positive, and 231 are labeled as negative. From each utterance, we extract the linguistic, acoustic, and visual features described above, which are then combined using the early fusion (or feature-level fusion) approach (Hall and Llinas, 4There is a small number of frames that CERT could not process, mostly due to the brief occlusions that occur when the speaker is showing the product she is reviewing. Modality Accuracy Baseline 55.93% One modality at a time Linguistic 70.94% Acoustic 64.85% Visual 67.31% Two modalities at a time Linguistic + Acoustic 72.88% Linguistic + Visual 72.39% Acoustic + Visual 68.86% Three modalities at a time Linguistic+Acoustic+Visual 74.09% Table 2: Utterance-level sentiment classification with linguistic, acoustic, and visual features. 1997; Atrey et al., 2010). In this approach, the features collected from all the multimodal streams are combined into a single feature vector, thus resulting in one vector for each utterance in the dataset which is used to make a decision about the sentiment orientation of the utterance. We run several comparative experiments, using one, two, and three modalities at a time. We use the entire set of 412 utterances and run ten fold cross validations using an SVM classifier, as implemented in the Weka toolkit.5 In line with previous work on emotion recognition in speech (Haq and Jackson, 2009; Anagnostopoulos and Vovoli, 2010) where utterances are selected in a speaker dependent manner (i.e., utterances from the same speaker are included in both training and test), as well as work on sentence-level opinion classification where document boundaries are not considered in the split performed between the training and test sets (Wilson et al., 2004; Wiegand and Klakow, 2009), the training/test split for each fold is performed at utterance level regardless of the video they belong to. Table 2 shows the results of the utterance-level sentiment classification experiments. The baseline is obtained using the ZeroR classifier, which assigns the most frequent label by default, averaged over the ten folds. 6 Discussion The experimental results show that sentiment classification can be effectively performed on multimodal datastreams. Moreover, the integration of 5http://www.cs.waikato.ac.nz/ml/weka/ 978 Figure 2: Visual and acoustic feature weights. This graph shows the relative importance of the information gain weights associated with the top most informative acoustic-visual features. visual, acoustic, and linguistic features can improve significantly over the use of one modality at a time, with incremental improvements observed for each added modality. Among the individual classifiers, the linguistic classifier appears to be the most accurate, followed by the classifier that relies on visual clues, and by the audio classifier. Compared to the best individual classifier, the relative error rate reduction obtained with the tri-modal classifier is 10.5%. The results obtained with this multimodal utterance classifier are found to be significantly better than the best individual results (obtained with the text modality), with significance being tested with a t-test (p=0.05). Feature analysis. To determine the role played by each of the visual and acoustic features, we compare the feature weights assigned by the learning algorithm, as shown in Figure 2. 
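For concreteness, the early (feature-level) fusion setup and the ten-fold cross-validation protocol used in these experiments can be sketched as follows. The paper uses the SVM implementation in the Weka toolkit; the scikit-learn classifier below is only a stand-in for illustration, and the function names are our own.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def early_fusion(linguistic, acoustic, visual):
    """Concatenate per-utterance feature blocks into a single vector per utterance.

    Each argument is an (n_utterances, n_features_of_that_modality) array.
    """
    return np.hstack([linguistic, acoustic, visual])

def evaluate(features, labels):
    """Ten-fold cross-validation accuracy with a linear-kernel SVM."""
    clf = SVC(kernel="linear")
    return cross_val_score(clf, features, labels, cv=10).mean()
```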
Interestingly, a distressed brow is the strongest indicator of sentiment, followed, this time not surprisingly, by the smile feature. Other informative features for sentiment classification are the voice probability, representing the energy in speech, the combined visual features that represent an angry face, and two of the cepstral coefficients. To reach a better understanding of the relation between features, we also calculate the Pearson correlation between the visual and acoustic features. Table 3 shows a subset of these correlation figures. As we expected, correlations between features of the same type are higher. For example, the correlation between features AU6 and AU12 or the correlation between intensity and loudness is higher than the correlation between AU6 and intensity. Nonetheless, we still find some significant correlations between features of different types, for instance AU12 and AU45 which are both significantly correlated with the intensity and loudness features. This give us confidence about using them for further analysis. Video-level sentiment analysis. To understand the role played by the size of the video-segments considered in the sentiment classification experiments, as well as the potential effect of a speaker-independence assumption, we also run a set of experiments where we use full videos for the classification. In these experiments, once again the sentiment annotation is done by two independent annotators, using the same protocol as in the utterance-based annotations. Videos that were ambivalent about the general sentiment were either labeled as neutral (and thus removed from the experiments), or labeled with the dominant sentiment. The interannotator agreement for this annotation was measured at 96.1%. As before, the linguistic, acoustic, and visual features are averaged over the entire video, and we use an SVM classifier in ten-fold cross validation experiments. Table 4 shows the results obtained in these video-level experiments. While the combination of modalities still helps, the improvement is smaller than the one obtained during the utterance-level classification. Specifically, the combined effect of acoustic and visual features improves significantly over the individual modalities. However, the combination of linguistic features with other modalities does not lead to clear improvements. This may be due to the smaller number of feature vectors used in the experiments (only 80, as compared to the 412 used in the previous setup). Another possible reason is the fact that the acoustic and visual modalities are significantly weaker than the linguistic modality, most likely due to the fact that the feature vectors are now speaker-independent, which makes it harder to improve over the linguistic modality alone. 7 Conclusions In this paper, we presented a multimodal approach for utterance-level sentiment classification. We introduced a new multimodal dataset consisting 979 AU6 AU12 AU45 AUs 1,1+4 Pitch Voice probability Intensity Loudness AU6 1.00 0.46* -0.03 -0.05 0.06 -0.14* -0.04 -0.02 AU12 1.00 -0.23* -0.33* 0.04 0.05 0.15* 0.16* AU45 1.00 0.05 -0.05 -0.11* -.163* 0.16* AUs 1,1+4 1.00 -0.11* -0.16* 0.06 0.07 Pitch 1.00 -0.04 -0.01 -0.08 Voice probability 1.00 0.19* 0.38* Intensity 1.00 0.85* Loudness 1.00 Table 3: Correlations between several visual and acoustic features. Visual features: AU6 Cheek raise, AU12 Lip corner pull, AU45 Blink eye and closure, AU1,1+4 Distress brow. Acoustic features: Pitch, Voice probability, Intensity, Energy. 
*Correlation is significant at the 0.05 level (1-tailed) . Modality Accuracy Baseline 55.93% One modality at a time Linguistic 73.33% Acoustic 53.33% Visual 50.66% Two modalities at a time Linguistic + Acoustic 72.00% Linguistic + Visual 74.66% Acoustic + Visual 61.33% Three modalities at a time Linguistic+Acoustic+Visual 74.66% Table 4: Video-level sentiment classification with linguistic, acoustic, and visual features. of sentiment annotated utterances extracted from video reviews, where each utterance is associated with a video, acoustic, and linguistic datastream. Our experiments show that sentiment annotation of utterance-level visual datastreams can be effectively performed, and that the use of multiple modalities can lead to error rate reductions of up to 10.5% as compared to the use of one modality at a time. In future work, we plan to explore alternative multimodal fusion methods, such as decision-level and meta-level fusion, to improve the integration of the visual, acoustic, and linguistic modalities. Acknowledgments We would like to thank Alberto Castro for his help with the sentiment annotations. This material is based in part upon work supported by National Science Foundation awards #0917170 and #1118018, by DARPA-BAA-12-47 DEFT grant #12475008, and by a grant from U.S. RDECOM. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the Defense Advanced Research Projects Agency, or the U.S. Army Research, Development, and Engineering Command. References C. Alm, D. Roth, and R. Sproat. 2005. Emotions from text: Machine learning for text-based emotion prediction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 347–354, Vancouver, Canada. C. Anagnostopoulos and E. Vovoli. 2010. Sound processing features for speaker-dependent and phraseindependent emotion recognition in berlin database. In Information Systems Development, pages 413– 421. Springer. A. Athar and S. Teufel. 2012. Context-enhanced citation sentiment detection. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Montr´eal, Canada, June. P. K. Atrey, M. A. Hossain, A. El Saddik, and M. Kankanhalli. 2010. Multimodal fusion for multimedia analysis: a survey. Multimedia Systems, 16. M. El Ayadi, M. Kamel, and F. Karray. 2011. Survey on speech emotion recognition: Features, classification schemes, and databases. Pattern Recognition, 44(3):572 – 587. K. Balog, G. Mishne, and M. de Rijke. 2006. Why are they excited? identifying and explaining spikes in blog mood levels. In Proceedings of the 11th Meeting of the European Chapter of the As sociation for Computational Linguistics (EACL-2006). Dmitri Bitouk, Ragini Verma, and Ani Nenkova. 2010. Class-level spectral features for emotion recognition. Speech Commun., 52(7-8):613–625, July. 980 J. Blitzer, M. Dredze, and F. Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Association for Computational Linguistics. A. J. Calder, A. M. Burton, P. Miller, A. W. Young, and S. Akamatsu. 2001. A principal component analysis of facial expressions. Vision research, 41(9):1179– 1208, April. G. Carenini, R. Ng, and X. Zhou. 2008. Summarizing emails with conversational cohesion and subjectivity. 
In Proceedings of the Association for Computational Linguistics: Human Language Technologies (ACLHLT 2008), Columbus, Ohio. P. Carvalho, L. Sarmento, J. Teixeira, and M. Silva. 2011. Liars and saviors in a sentiment annotated corpus of comments to political debates. In Proceedings of the Association for Computational Linguistics (ACL 2011), Portland, OR. L. S. Chen, T. S. Huang, T. Miyasato, and R. Nakatsu. 1998. Multimodal human emotion/expression recognition. In Proceedings of the 3rd. International Conference on Face & Gesture Recognition, pages 366–, Washington, DC, USA. IEEE Computer Society. L C De Silva, T Miyasato, and R Nakatsu, 1997. Facial emotion recognition using multi-modal information, volume 1, page 397401. IEEE Signal Processing Society. P. Ekman, W. Friesen, and J. Hager. 2002. Facial action coding system. P. Ekman. 1993. Facial expression of emotion. American Psychologist, 48:384–392. I.A. Essa and A.P. Pentland. 1997. Coding, analysis, interpretation, and recognition of facial expressions. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 19(7):757 –763, jul. A. Esuli and F. Sebastiani. 2006. SentiWordNet: A publicly available lexical resource for opinion mining. In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC 2006), Genova, IT. D.L. Hall and J. Llinas. 1997. An introduction to multisensor fusion. IEEE Special Issue on Data Fusion, 85(1). S. Haq and P. Jackson. 2009. Speaker-dependent audio-visual emotion recognition. In International Conference on Audio-Visual Speech Processing. V. Hatzivassiloglou and K. McKeown. 1997. Predicting the semantic orientation of adjectives. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 174–181. M. Hu and B. Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, Seattle, Washington. F. Li, S. J. Pan, O. Jin, Q. Yang, and X. Zhu. 2012. Cross-domain co-extraction of sentiment and topic lexicons. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Jeju Island, Korea. G. Littlewort, J. Whitehill, Tingfan Wu, I. Fasel, M. Frank, J. Movellan, and M. Bartlett. 2011. The computer expression recognition toolbox (cert). In Automatic Face Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, pages 298 –305, march. A. Maas, R. Daly, P. Pham, D. Huang, A. Ng, and C. Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the Association for Computational Linguistics (ACL 2011), Portland, OR. F. Mairesse, J. Polifroni, and G. Di Fabbrizio. 2012. Can prosody inform sentiment analysis? experiments on short spoken reviews. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pages 5093 –5096, march. X. Meng, F. Wei, X. Liu, M. Zhou, G. Xu, and H. Wang. 2012. Cross-lingual mixture model for sentiment classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Jeju Island, Korea. F. Metze, T. Polzehl, and M. Wagner. 2009. Fusion of acoustic and linguistic features for emotion detection. In Semantic Computing, 2009. ICSC ’09. IEEE International Conference on, pages 153 –160, sept. R. Mihalcea, C. Banea, and J. Wiebe. 2007. Learning multilingual subjective language via cross-lingual projections. 
In Proceedings of the Association for Computational Linguistics, Prague, Czech Republic. L.P. Morency, R. Mihalcea, and P. Doshi. 2011. Towards multimodal sentiment analysis: Harvesting opinions from the web. In Proceedings of the International Conference on Multimodal Computing, Alicante, Spain. J. Oh, K. Torisawa, C. Hashimoto, T. Kawada, S. De Saeger, J. Kazama, and Y. Wang. 2012. Why question answering using sentiment analysis and word classes. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Korea. B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics, Barcelona, Spain, July. 981 V. Perez-Rosas, R. Mihalcea, and L.-P. Morency. 2013. Multimodal sentiment analysis of spanish online videos. IEEE Intelligent Systems. T. Polzin and A. Waibel. 1996. Recognizing emotions in speech. In In ICSLP. S. Raaijmakers, K. Truong, and T. Wilson. 2008. Multimodal subjectivity analysis of multiparty conversation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 466–474, Honolulu, Hawaii. M. Rosenblum, Y. Yacoob, and L.S. Davis. 1996. Human expression recognition from motion using a radial basis function network architecture. Neural Networks, IEEE Transactions on, 7(5):1121 –1138, sep. B. Schuller, M. Valstar, R. Cowie, and M. Pantic, editors. 2011a. Audio/Visual Emotion Challenge and Workshop (AVEC 2011). B. Schuller, M. Valstar, F. Eyben, R. Cowie, and M. Pantic, editors. 2011b. Audio/Visual Emotion Challenge and Workshop (AVEC 2011). F. Eyben M. Wollmer B. Schuller. 2009. Openear introducing the munich open-source emotion and affect recognition toolkit. In ACII. N. Sebe, I. Cohen, T. Gevers, and T.S. Huang. 2006. Emotion recognition based on joint visual and audio cues. In ICPR. D. Silva, T. Miyasato, and R. Nakatsu. 1997. Facial emotion recognition using multi-modal information. In Proceedings of the International Conference on Information and Communications Security. S. Somasundaran, J. Wiebe, P. Hoffmann, and D. Litman. 2006. Manual annotation of opinion categories in meetings. In Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006. P. Stone. 1968. General Inquirer: Computer Approach to Content Analysis. MIT Press. C. Strapparava and R. Mihalcea. 2007. Semeval-2007 task 14: Affective text. In Proceedings of the 4th International Workshop on the Semantic Evaluations (SemEval 2007), Prague, Czech Republic. M. Taboada, J. Brooke, M. Tofiloski, K. Voli, and M. Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(3). R. Tato, R. Santos, R. Kompe, and J. M. Pardo. 2002. Emotional space improves emotion recognition. In In Proc. ICSLP 2002, pages 2029–2032. Y.-I. Tian, T. Kanade, and J.F. Cohn. 2001. Recognizing action units for facial expression analysis. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(2):97 –115, feb. P. Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), pages 417–424, Philadelphia. D. Ververidis and C. Kotropoulos. 2006. Emotional speech recognition: Resources, features, and methods. Speech Communication, 48(9):1162–1181, September. J. 
Wagner, E. Andre, F. Lingenfelser, and Jonghwa Kim. 2011. Exploring fusion methods for multimodal emotion recognition with missing data. Affective Computing, IEEE Transactions on, 2(4):206 –218, oct.-dec. X. Wan. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of the Joint Conference of the Association of Computational Linguistics and the International Joint Conference on Natural Language Processing, Singapore, August. J. Wiebe and E. Riloff. 2005. Creating subjective and objective sentence classifiers from unannotated texts. In Proceedings of the 6th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2005) (invited paper), Mexico City, Mexico. J. Wiebe, T. Wilson, and C. Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3):165– 210. M. Wiegand and D. Klakow. 2009. The role of knowledge-based features in polarity classification at sentence level. In Proceedings of the International Conference of the Florida Artificial Intelligence Research Society. T. Wilson, J. Wiebe, and R. Hwa. 2004. Just how mad are you? finding strong and weak opinion clauses. In Proceedings of the American Association for Artificial Intelligence. M. Wollmer, B. Schuller, F. Eyben, and G. Rigoll. 2010. Combining long short-term memory and dynamic bayesian networks for incremental emotionsensitive artificial listening. IEEE Journal of Selected Topics in Signal Processing, 4(5), October. B. Yang and C. Cardie. 2012. Extracting opinion expressions with semi-markov conditional random fields. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Korea. Z. Zhihong, M. Pantic G.I. Roisman, and T.S. Huang. 2009. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. PAMI, 31(1). 982
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 983–992, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics Probabilistic Sense Sentiment Similarity through Hidden Emotions Mitra Mohtarami1, Man Lan2, and Chew Lim Tan1 1Department of Computer Science, National University of Singapore; 2Department of Computer Science, East China Normal University {mitra,tancl}@comp.nus.edu.sg;[email protected] Abstract Sentiment Similarity of word pairs reflects the distance between the words regarding their underlying sentiments. This paper aims to infer the sentiment similarity between word pairs with respect to their senses. To achieve this aim, we propose a probabilistic emotionbased approach that is built on a hidden emotional model. The model aims to predict a vector of basic human emotions for each sense of the words. The resultant emotional vectors are then employed to infer the sentiment similarity of word pairs. We apply the proposed approach to address two main NLP tasks, namely, Indirect yes/no Question Answer Pairs inference and Sentiment Orientation prediction. Extensive experiments demonstrate the effectiveness of the proposed approach. 1 Introduction Sentiment similarity reflects the distance between words based on their underlying sentiments. Semantic similarity measures such as Latent Semantic Analysis (LSA) (Landauer et al., 1998) can effectively capture the similarity between semantically related words like "car" and "automobile", but they are less effective in relating words with similar sentiment orientation like "excellent" and "superior". For example, the following relations show the semantic similarity between some sentiment words computed by LSA: :   ,   = 0.40 <   ,  = 0.46 <  ,   = 0.65 Clearly, the sentiment similarity between the above words should be in the reversed order. In fact, the sentiment intensity in "excellent" is closer to "superior" than "good". Furthermore, sentiment similarity between "good" and "bad" should be 0. In this paper, we propose a probabilistic approach to detect the sentiment similarity of words regarding their senses and underlying sentiments. For this purpose, we propose to model the hidden emotions of word senses. We show that our approach effectively outperforms the semantic similarity measures in two NLP tasks: Indirect yes/no Question Answer Pairs (IQAPs) Inference and Sentiment Orientation (SO) prediction that are described as follows: In IQAPs, answers do not explicitly contain the yes or no keywords, but rather provide context information to infer the yes or no answer (e.g. Q: Was she the best one on that old show? A: She was simply funny). Clearly, the sentiment words in IQAPs are the pivots to infer the yes or no answers. We show that sentiment similarity between such words (e.g., here the adjectives best and Funny) can be used effectively to infer the answers. The second application (SO prediction) aims to determine the sentiment orientation of individual words. Previous research utilized the semantic relations between words obtained from WordNet (Hassan and Radev, 2010) and semantic similarity measures (e.g. Turney and Littman, 2003) for this purpose. In this paper, we show that sentiment similarity between word pairs can be effectively utilized to compute SO of words. 
The contributions of this paper are follows: • We propose an effective approach to predict the sentiment similarity between word pairs through hidden emotions at the sense level, • We show the utility of sentiment similarity prediction in IQAP inference and SO prediction tasks, and • Our hidden emotional model can infer the type and number of hidden emotions in a corpus. 983 2 Sentiment Similarity through Hidden Emotions As we discussed above, semantic similarity measures are less effective to infer sentiment similarity between word pairs. In addition, different senses of sentiment words carry different human emotions. In fact, a sentiment word can be represented as a vector of emotions with intensity values from "very weak" to "very strong". For example, Table 1 shows several sentiment words and their corresponding emotion vectors based the following set of emotions: e = [anger, disgust, sadness, fear, guilt, interest, joy, shame, surprise]. For example, "deceive" has 0.4 and 0.5 intensity values with respect to the emotions "disgust" and "sadness" with an overall -0.9 (i.e. -0.4-0.5) value for sentiment orientation (Neviarouskaya et al., 2007; Neviarouskaya et al., 2009). Word Emotional Vector SO e = [anger, disgust, sadness, fear, guilt, interest, joy, shame, surprise] Rude ['0.2', '0.4',0,0,0,0,0,0,0] -0.6 doleful [0, 0, '0.4',0,0,0,0,0,0] -0.4 smashed [0,0, '0.8', '0.6',0,0,0,0,0] -1.4 shamefully [0,0,0,0,0,0,0, '0.7',0] -0.7 deceive [0, '0.4', '0.5',0,0,0,0,0,0] -0.9 Table 1. Sample of emotional vectors The difficulty of the sentiment similarity prediction task is evident when terms carry different types of emotions. For instance, all the words in Table 1 have negative sentiment orientation, but, they carry different emotions with different emotion vectors. For example, "rude" reflects the emotions "anger" and "disgust", while the word "doleful" only reflects the emotion "sadness". As such, the word "doleful" is closer to the words "smashed" and "deceive" involving the emotion "sadness" than others. We show that emotion vectors of the words can be effectively utilized to predict the sentiment similarity between them. Previous research shows little agreement about the number and types of the basic emotions (Ortony and Turner 1990; Izard 1971). Thus, we assume that the number and types of basic emotions are hidden and not pre-defined and propose a Probabilistic Sense Sentiment Similarity (PSSS) approach to extract the hidden emotions of word senses to infer their sentiment similarity. 3 Hidden Emotional Model Online review portals provide rating mechanisms (in terms of stars, e.g. 5- or 10-star rating) to al- Figure 1.The structure of PSSS model low users to attach ratings to their reviews. A rating indicates the summarized opinion of a user who ranks a product or service based on his feelings. There are various feelings and emotions behind such ratings with respect to the content of the reviews. Figure 1 shows the intermediate layer of hidden emotions behind the ratings (sentiments) assigned to the documents (reviews) containing the words. This Figure indicates the general structure of our PSSS model. It shows that hidden emotions (ei) link the rating (rj) and the documents (dk). In this Section, we aim to employ ratings and the relations among ratings, documents, and words to extract the hidden emotions. Figure 2 illustrates a simple graphical model using plate representation of Figure 1. 
As Figures 2 shows, the rating r from a set of ratings R= {r1,…,rp} is assigned to a hidden emotion set E={e1,…,ek}. A document d from a set of documents D= {d1,…,dN} with vocabulary set W= {w1,…,wM} is associated with the hidden emotion set. The model presented in Figure 2(a) has been explored in (Mohtarami et al., 2013) and is called Series Hidden Emotional Model (SHEM). This representation assumes that the word w is dependent to d and independent to e (we refer to this Assumption as A1). However, in reality, a word w can inherit properties (e.g., emotions) (b): Bridged model Figure 1. The structure of PSSS model (a): Series model Figure 2. Hidden emotional model 984 from the document d that contains w. Thus, we can assume that w is implicitly dependant on e. To account for this, we present Bridged Hidden Emotional Model (BHEM) shown in Figure 2(b). Our assumption, A2, in the BHEM model is as follows: w is dependent to both d and e. Considering Figure 1, we represent the entire text collection as a set of (w,d,r) in which each observation (w,d,r) is associated with a set of unobserved emotions. If we assume that the observed tuples are independently generated, the whole data set is generated based on the joint probability of the observation tuples (w,d,r) as the follows (Mohtarami et al., 2013): " = ### $%, , &',(,) ' ( ) = # ##$%, , &',(&(,) 1 ' ( ) where, P(w,d,r) is the joint probability of the tuple (w,d,r), and n(w,d,r) is the frequency of w in document d of rating r (note that n(w,d) is the term frequency of w in d and n(d,r) is one if r is assigned to d, and 0 otherwise). The joint probability for the BHEM is defined as follows considering hidden emotion e: - regarding class probability of the hidden emotion e to be assigned to the observation (w,d,r): $%, ,  = + $%, , | $  = = + $%, | $| $  - regarding assumption A2 and Bayes' Rule: = +$%|, $, $|  - using Bayes' Rule: = +$, |%$%$|  - regarding A2 and conditional independency: = +$|%$ |%$%$|  = $|% + $%| $ $|  2 In the bridged model, the joint probability does not depend on the probability P(d|e) and the probabilities P(w|e), P(e) and P(r|e) are unknown, while in the SHEM model explained in (Mohtarami et al., 2013), the joint probability does not depend on P(w|e), and probabilities P(d|e), P(e), and P(r|e) are unknown. We employ Maximum Likelihood approach to learn the probabilities and infer the possible hidden emotions. The log-likelihood of the whole data set D in Equation (1) can be defined as follows:  = ++ + %,  ,log$%, ,  3 ' ( ) Replacing P(w,d,r) by the values computed using the bridged model in Equation (2) results in:  = + + + %,  , log[$|% + $%| $ $|  ] ' ( ) 4 The above optimization problems are hard to compute due to the log of sum. Thus, Expectation-maximization (EM) is usually employed. EM consists of two following steps: 1. E-step: Calculates posterior probabilities for hidden emotions given the words, documents and ratings, and 2. M-step: Updates unknown probabilities (such as P(w|e) etc) using the posterior probabilities in the E-step. The steps of EM can be computed for BHEM model. 
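Because the equations above were damaged during text extraction, the LaTeX block below gives our best reconstruction of Equations (1)–(4) for the bridged model, based on the surrounding definitions; it is a reading of the garbled text rather than a verbatim reproduction.

```latex
% Reconstruction (to the best of our reading) of Eqs. (1)-(4), bridged model (BHEM).
\begin{align}
P(D) &= \prod_{w}\prod_{d}\prod_{r} P(w,d,r)^{\,n(w,d,r)},
      \qquad n(w,d,r) = n(w,d)\, n(d,r) \tag{1}\\
P(w,d,r) &= P(d \mid w)\sum_{e} P(w \mid e)\,P(e)\,P(r \mid e) \tag{2}\\
\mathcal{L} &= \sum_{w}\sum_{d}\sum_{r} n(w,d,r)\,\log P(w,d,r) \tag{3}\\
            &= \sum_{w}\sum_{d}\sum_{r} n(w,d,r)\,
               \log\!\Big[ P(d \mid w)\sum_{e} P(w \mid e)\,P(e)\,P(r \mid e) \Big] \tag{4}
\end{align}
```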
EM of the model employs assumptions A2 and Bayes Rule and is defined as follows: E-step: $ |%, , = $| $ $%|  ∑$| $ $%|  5 M-step: $|  = ∑∑ %,  , $e|%, ,  ' ( ∑∑∑ %,  ,  $e|%, ,  ' ( ) = ∑ %, $e|%, , ' ∑∑ %, $e|%, ,  ' ) 6 $%|  = ∑∑ %,  , $e|%, ,  ( ) ∑∑∑ %,  ,$e|%, ,  ( ) ' = ∑ %, $e|%, ,  ) ∑∑ %, $e|%, ,  ) ' 7 $  = ∑∑∑ %,  , $e|%, ,  ' ( ) ∑∑∑∑ %, , $e|%, ,  ' ) ( 8 = ∑∑ %,  $e|%, ,  ' ) ∑∑∑ %,  $e|%, ,  ' ) 8 8 Note that in Equation (5), the probability P(e|w,d,r) does not depend on the document d. Also, in Equations (6)-(8) we remove the dependency on document d using the following Equation: + %,  , = %,  ( 9 where n(w,r) is the occurrence of w in all the documents in the rating r. The EM steps computed by the bridged model do not depend on the variable document d, and discard d from the model. The reason is that w bypasses d to directly associate with the hidden emotion e in Figure 2(b). 985 Similar to BHEM, the EM steps for SHEM can be computed by considering assumptions A1 and Bayes Rule as follows (Mohtarami et al., 2013): E-step: $ |%, ,  = $| $ $|  ∑$| $ $|  10 M-step: $|  = ∑∑ %,  , $e|%, ,  ' ( ∑∑∑ %,  ,  $e|%, ,  ' ( ) 11 $|  = ∑∑ %,  , $e|%, ,  ' ) ∑∑∑ %,  ,  $e|%, ,  ' ) ( 12 $  = ∑∑∑ %,  ,  $e|%, ,  ' ( ) ∑∑∑∑ %,  , $e|%, ,  ' ) ( 8 13 Finally, we construct the emotional vectors using the algorithm presented in Table 2. The algorithm employs document-rating, term-document and term-rating matrices to infer the unknown probabilities. This algorithm can be used with both bridged or series models. Our goal is to infer the emotional vector for each word w that can be obtained by the probability P(w|e). Note that, this probability can be simply computed for the SHEM model using P(d|e) as follows: $%|  = +$%|$|  ( 14 3.1 Enriching Hidden Emotional Models We enrich our emotional model by employing the requirement that the emotional vectors of two synonym words w1 and w2 should be similar. For this purpose, we utilize the semantic similarity between each two words and create an enriched matrix. Equation (15) shows how we compute this matrix. To compute the semantic similarity between word senses, we utilize their synsets as follows: %;%< = $=> %;|> %<? = 1 |> %;| + 1 |> %<| + $=%;|%<? |@A&'B| C |@A&'D| E 15 where, syn(w) is the synset of w. Let count(wi, wj) be the co-occurrence of the wi and wj, and let count(wj) be the total word count. The probability of wi given wj will then be P(wi|wj) = count(wi, wj)/ count(wj). In addition, note that employing the synset of the words help to obtain different emotional vectors for each sense of a word. The resultant enriched matrix W×W is multiplied to the inputs of our hidden model (matrices W×D or W×R. Note that this takes into account Input: Series Model: Document-Rate D×R, Term-Document W×D Bridged Model: Term-Rate W×R Output: Emotional vectors {e1, e2, …,ek} for w Algorithm: 1. Enriching hidden emotional model: Series Model: Update Term-Document W×D Bridged Model: Update Term-Rate W×R 2. Initialize unknown probabilities: Series Model: Initialize P(d|e), P(r|e), and P(e), randomly Bridged Model: Initialize P(w|e), P(r|e), and P(e) 3. while L has not converged to a pre-specified value do 4. E-step; Series Model: estimate the value of P(e|w,d,r) in Equation 10 Bridged Model: estimate the value of P(e|w,d,r) in Equation 5 5. M-step; Series Model: estimate the values of P(r|e), P(d|e), and P(e) in Equations 11-13, respectively Bridged Model: estimate the values of P(r|e), P(w|e), and P(e) in Equations 6-8, respectively 6. end while 7. 
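The E- and M-steps above (Equations 5–8) can be sketched as a short EM loop. The sketch below assumes the counts have already been folded into a term–rating matrix n(w, r) as in Equation (9); the array names, random initialization, and number of iterations are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def em_bridged(n_wr, n_emotions, n_iters=100, seed=0):
    """EM for the bridged hidden emotional model.

    n_wr: (n_words, n_ratings) term-rating count matrix, n(w, r).
    Returns P(w|e), P(r|e), P(e); the rows of P(w|e) act as word emotional vectors.
    """
    rng = np.random.default_rng(seed)
    n_words, n_ratings = n_wr.shape
    # Random initialization of the unknown distributions, column-normalized per emotion.
    p_w_e = rng.random((n_words, n_emotions));   p_w_e /= p_w_e.sum(axis=0)
    p_r_e = rng.random((n_ratings, n_emotions)); p_r_e /= p_r_e.sum(axis=0)
    p_e = np.full(n_emotions, 1.0 / n_emotions)

    for _ in range(n_iters):
        # E-step (Eq. 5): posterior P(e | w, r) for every (word, rating) pair.
        post = p_w_e[:, None, :] * p_r_e[None, :, :] * p_e[None, None, :]
        post /= post.sum(axis=2, keepdims=True) + 1e-12

        # M-step (Eqs. 6-8): re-estimate the distributions from weighted counts.
        weighted = n_wr[:, :, None] * post                    # n(w,r) * P(e|w,r)
        p_r_e = weighted.sum(axis=0)                          # sum over words
        p_r_e /= p_r_e.sum(axis=0, keepdims=True) + 1e-12
        p_w_e = weighted.sum(axis=1)                          # sum over ratings
        p_w_e /= p_w_e.sum(axis=0, keepdims=True) + 1e-12
        p_e = weighted.sum(axis=(0, 1))
        p_e /= p_e.sum()

    return p_w_e, p_r_e, p_e
```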
If series hidden emotional model is used then 8. Infer word emotional vector: estimate P(w|e) in Equation 14. 9. End if Table 2. Constructing emotional vectors via P(w|e) the senses of the words as well. The learning step of EM is done using the updated inputs. In this case, the correlated words can inherit the properties of each other. For example, if wi does not occur in a document or rating involving another word (i.e., wj), the word wi can still be indirectly associated with the document or rating through the word wj. However, the distribution of the opinion words in documents and ratings is not uniform. This may decrease the effectiveness of the enriched matrix. The nonuniform distribution of opinion words has been also reported by Amiri et al. (2012) who showed that the positive words are frequently used in negative reviews. We also observed the same pattern in the development dataset. Figure 3 shows the overall occurrence of some positive and negative seeds in various ratings. As shown, in spite of the negative words, the positive words may frequently occur in both positive and negative documents. Such distribution of 986 Figure 3. Nonuniform distribution of opinion words positive words can mislead the enriched model. To address this issue, we measure the confidence of an opinion word in the enriched matrix as follows. K L ' = M[NO'P × "O'P −NO'R × "O'R] NO'P × "O'P + NO'R × "O'R 16 where, NO' P (NO' R) is the frequency of w in the ratings 1 to 4 (7 to 10), and "O'P ("O'R) is the total number of documents with rating 1 to 4 (7 to 10) that contain w. The confidence value of w varies from 0 to 1, and it increases if: • There is a large difference between the occurrences of w in positive and negative ratings. • There is a large number of reviews involving w in the relative ratings. To improve the efficiency of enriched matrix, the columns corresponding to each word in the matrix are multiplied by its confidence value. 4 Predicting Sentiment Similarity We utilize the approach proposed in (Mohtarami et al., 2013) to compute the sentiment similarity between two words. This approach compares the emotional vector of the given words. Let X and Y be the emotional vectors of two words. Equation (17) computes their correlation: V, W = ∑ V; −VXW; −WX & ;YZ  −1[\ 17 where, is number of emotional categories, V,] WX and [, \ are the mean and standard deviation values of ^ and _ respectively. V, W = −1 indicates that the two vectors are completely dissimilar, and V, W = 1 indicates that the vectors have perfect similarity. The approach makes use of a thresholding mechanism to estimate the proper correlation value to find sentimentally similar words. For this, as in Mohtarami et al. (2013) we utilized the antonyms of the words. We consider two words, Input: `: The adjective in the question of given IQAP. : The adjective in the answer of given IQAP. Output: answer ∈{> , ,    } Algorithm: 1. if ` or  are missing from our corpus then 2. answer=Uncertain; 3. else if `, < 0 then 4. answer=No; 5. else if `,  > 0 then 6. answer=yes; Figure 4. Sentiment similarity for IQAP inference %; and %< as similar in sentiment iff they satisfy both of the following conditions: 1. =%;,%<? > =%;,~%<?,  2. =%;,%<? > =~%;,%<? where, ~%; is antonym of %;, and =%;, %<? is obtained from Equation (17). Finally, we compute the sentiment similarity (SS) as follows: =%;,%<? = =%;,%<? −f g =%;,~%<?, =~%;,%<?h 18 Equation (18) enforces two sentimentally similar words to have weak correlation to the antonym of each others. 
A positive value of SS(.,.) indicates that the words are sentimentally similar and a negative value shows that they are dissimilar.

5 Applications
We explain our approach to utilizing the sentiment similarity between words to perform the IQAP inference and SO prediction tasks respectively.
In IQAPs, we employ the sentiment similarity between the adjectives in questions and answers to interpret the indirect answers. Figure 4 shows the algorithm for this purpose. SS(.,.) indicates the sentiment similarity computed by Equation (18). A positive SS means the words are sentimentally similar and thus the answer is yes, whereas a negative SS leads to a no response.
In the SO prediction task, we aim to compute a more accurate SO using our sentiment similarity method. Turney and Littman (2003) proposed a method in which the SO of a word is calculated based on its semantic similarity with seven positive words minus its similarity with seven negative words, as shown in Figure 5. As the similarity function A(.,.) they employed point-wise mutual information (PMI) to compute the similarity between the words. Here, we utilize the same approach, but instead of PMI we use our SS(.,.) measure as the similarity function.

Figure 5. SO prediction based on the similarity function A(.,.)
Input: Pwords, seven words with positive SO; Nwords, seven words with negative SO; A(.,.), a similarity function; w, a given word with unknown SO.
Output: sentiment orientation of w
Algorithm:
1. SO(w) = \sum_{p \in Pwords} A(w, p) - \sum_{n \in Nwords} A(w, n)
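A direct transcription of Figure 5, with any similarity function (here our SS measure would be plugged in as A). The seed lists shown are the ones commonly cited for Turney and Littman (2003); treating them and the similarity callable as given is an assumption of this sketch.

POS_SEEDS = ["good", "nice", "excellent", "positive", "fortunate", "correct", "superior"]
NEG_SEEDS = ["bad", "nasty", "poor", "negative", "unfortunate", "wrong", "inferior"]

def semantic_orientation(word, similarity, pos_seeds=POS_SEEDS, neg_seeds=NEG_SEEDS):
    # Figure 5: SO(w) = sum of similarities to positive seeds minus negative seeds.
    # `similarity` is any function A(w1, w2), e.g. the SS measure of Equation (18).
    return (sum(similarity(word, p) for p in pos_seeds)
            - sum(similarity(word, n) for n in neg_seeds))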
6 Evaluation and Results
6.1 Data and Settings
We used the review dataset employed by Maas et al. (2011) as the development dataset; it contains movie reviews with star ratings from one star (most negative) to 10 stars (most positive). We exclude the ratings 5 and 6, which are more neutral. We used this dataset to compute all the input matrices in Table 2 as well as the enriched matrix. The development dataset contains 50k movie reviews and a 90k vocabulary. We also used two datasets for evaluation: the MPQA (Wilson et al., 2005) and IQAP (Marneffe et al., 2010) datasets. The MPQA dataset is used for the SO prediction experiments, while the IQAP dataset is used for the IQAP experiments. We ignored the neutral words in the MPQA dataset and used the remaining 4k opinion words. The IQAP dataset contains 125 IQAPs and their corresponding yes or no labels as the ground truth.

6.2 Experimental Results
To evaluate our PSSS model, we perform experiments on the SO prediction and IQAP inference tasks. Here, we consider six emotions for both the bridged and series models; we study the effect of the number of emotions in Section 7.1. Also, we set a threshold of 0.3 for the confidence value in Equation (16), i.e. we set confidence values smaller than the threshold to 0. We explain the effect of this parameter in Section 7.3.

Evaluation of SO Prediction
We evaluate the performance of our PSSS models in the SO prediction task using the algorithm explained in Figure 5, setting our PSSS as the similarity function A. The results on SO prediction are presented in Table 3.

Table 3. Performance on the SO prediction task
Method      Precision  Recall  F1
PMI         56.20      56.36   55.01
ER          65.68      65.68   63.27
PSSS-SHEM   68.51      69.19   67.96
PSSS-BHEM   69.39      70.07   68.68

The first and second rows present the results of our baselines, PMI (Turney and Littman, 2003) and the Expected Rating (ER) of words (Potts, 2011), respectively. PMI extracts the semantic similarity between words using their co-occurrences. As Table 3 shows, it leads to poor performance. This is mainly due to the relatively small size of the development dataset, which affects the quality of the co-occurrence information used by PMI. ER computes the expected rating of a word based on the distribution of the word across rating categories; the value of ER indicates the SO of the word. As shown in the last two rows of the table, the results of the PSSS approach are higher than PMI and ER. The reason is that PSSS combines the sentiment space (through the ratings, via the matrices W×R in BHEM and D×R in SHEM) and the semantic space (through the input W×D in SHEM and the enriched matrix W×W in both hidden models), whereas PMI employs only the semantic space (i.e., the co-occurrence of the words) and ER uses only the occurrence of the words in rating categories. Furthermore, the PSSS model achieves higher performance with BHEM than with SHEM. This is because the emotional vectors of the words are directly computed from the EM steps of BHEM, whereas the emotional vectors of SHEM are computed after the EM steps using Equation (14). This causes the SHEM model to estimate the number and type of the hidden emotions less accurately than BHEM, although the performances of SHEM and BHEM are comparable, as explained in Section 7.1.

Evaluation of IQAP Inference
To apply our PSSS to the IQAP inference task, we use it as the sentiment similarity measure in the algorithm explained in Figure 4. The results are presented in Table 4.

Table 4. Performance on the IQAP inference task
Method                   Prec.  Rec.   F1
Marneffe et al. (2010)   60.00  60.00  60.00
PMI                      60.61  58.70  59.64
PSSS-SHEM                62.55  61.75  61.71
PSSS-BHEM (w/o WSD)      65.90  66.11  63.74
PSSS-BHEM (with WSD)     66.95  67.15  65.66

The first and second rows are baselines. The first row is the result obtained by the approach of Marneffe et al. (2010), which is based on the similarity between the SO of the adjectives in question and answer. The second row shows the results of using a popular semantic similarity measure, PMI, as the sentiment similarity (SS) measure in Figure 4. The results show that PMI is less effective at capturing sentiment similarity. Our PSSS approach directly infers yes or no responses using the SS between the adjectives and does not require computing the SO of the adjectives. In Table 4, PSSS-SHEM and PSSS-BHEM indicate the results when we use our PSSS with SHEM and BHEM respectively. Table 4 shows the effectiveness of our sentiment similarity measure: both models improve over the baselines, while the bridged model leads to higher performance than the series model. Furthermore, we employ Word Sense Disambiguation (WSD) to disambiguate the adjectives in the question and its corresponding answer. For example, Q: "... Is that true?" A: "This is extraordinary and preposterous." In the answer, the correct sense of "extraordinary" is "unusual", and as such the answer no can be correctly inferred. In the table, (w/o WSD) is based on the first (most common) sense of the words, whereas (with WSD) uses the actual sense of the words. As Table 4 shows, WSD increases the performance, and it could have a larger effect if more IQAPs contained adjectives whose intended sense differs from the first sense.

7 Analysis and Discussions
7.1 Number and Types of Emotions
In our PSSS approach, there is no limitation on the number and types of emotions, as we assume the emotions are hidden. In this section, we perform experiments to predict the number and type of the hidden emotions.
Figures 6 and 7 show the results of the two hidden models (SHEM and BHEM) on the SO prediction and IQAP inference tasks respectively, for different numbers of emotions.

[Figure 6. Performance of BHEM and SHEM on SO prediction for different numbers of emotions]
[Figure 7. Performance of BHEM and SHEM on IQAP inference for different numbers of emotions]

As the figures show, in both tasks SHEM achieves its best performance with 11 emotions, whereas BHEM achieves its best performance with six. The question is then which number of emotions should be chosen. To answer this, we study the results further. First, for SHEM there is no significant difference between the performances with six and 11 emotions in the SO prediction task, and the same holds for BHEM. Also, the performances of SHEM on the IQAP inference task with six and 11 emotions are comparable, whereas for BHEM there is a significant difference between six and 11 emotions. We therefore choose the dimension at which both hidden emotional models show reasonable performance on both tasks, which here is six. Second, as shown in Figures 6 and 7, in contrast to BHEM, the performance of SHEM does not change considerably with the number of emotions on either task. This is because, in SHEM, the emotional vectors of the words are derived from the emotional vectors of the documents after the EM steps (see Equation (14)), whereas in BHEM the emotional vectors are obtained directly from the EM steps. Thus, the bridged model is more sensitive than the series model to the number of emotions, which could indicate that it is also more accurate at estimating that number. Therefore, based on the above discussion, the estimated number of emotions is six for our development dataset. This number may vary for different development datasets.
In addition to the number of emotions, their types can also be interpreted using our approach. To achieve this, we sort the words based on their probability values P(w|e) with respect to each emotion; the type of an emotion can then be interpreted by inspecting its top k words. For example, Table 5 shows the top 6 words for three out of the six emotions obtained with BHEM (the numbers in parentheses indicate the sense of each word). The corresponding emotions for these categories can be interpreted as "wonderful", "boring" and "disreputable", respectively.

Table 5. Sample words in three emotions
Emotion#1: excellent (1), magnificently (1), blessed (1), sublime (1), affirmation (1), tremendous (2)
Emotion#2: unimpressive (1), humorlessly (1), paltry (1), humiliating (1), uncreative (1), lackluster (1)
Emotion#3: disreputable (1), villian (1), onslaught (1), ugly (1), old (1), disrupt (1)

We also observed that, in SHEM with eleven emotions, some of the emotion categories have similar top k words, such that they could be merged to represent the same emotion. This further indicates that BHEM is better than SHEM at estimating the number of emotions.

7.2 Effect of Synsets and Antonyms
We show the important effect of synsets and antonyms in computing the sentiment similarity of words. For this purpose, we repeat the SO prediction experiment, computing the sentiment similarity of word pairs with and without synonyms and antonyms. Figure 8 shows the results obtained with BHEM.

[Figure 8. Effect of synonyms and antonyms in the SO prediction task for different numbers of emotions in BHEM]
As the figure shows, the highest performance is achieved when both synonyms and antonyms are used, while the lowest performance is obtained without using them. Note that when the synonyms are not used, the entries of the enriched matrix are computed using P(wi|wj) instead of P(syn(wi)|syn(wj)) in Equation (15). Also, when the antonyms are not used, the max(.,.) term in Equation (18) is 0 and SS is computed using only the correlation between the words. The results show that synonyms improve the performance: the two highest performances are obtained when we use synonyms and the two lowest when we do not, which indicates that the synsets of the words improve the quality of the enriched matrix. The results also show that antonyms improve the result (compare WOSynWAnt with WOSynWOAnt). However, synonyms lead to a greater improvement than antonyms (compare WSynWOAnt with WOSynWAnt).

7.3 Effect of Confidence Value
In Section 3.1, we defined a confidence value for each word to improve the quality of the enriched matrix. To illustrate the utility of the confidence value, we repeat the SO prediction experiment with BHEM using all the words that appear in the enriched matrix, with different confidence thresholds. The results are shown in Figure 9: "w/o confidence" shows the results when we do not use the confidence values, while "with confidence" shows the results when we do. Also, "confidence>x" indicates the results when we set all confidence values smaller than x to 0; this thresholding helps to eliminate the effect of low-confidence words.

[Figure 9. Effect of confidence values in SO prediction for different numbers of emotions in BHEM]

As Figure 9 shows, "w/o confidence" leads to the lowest performance, while "with confidence" improves the performance across different numbers of emotions. The thresholding is also effective: for example, a threshold of 0.3 or 0.4 improves the performance. However, if a large value (e.g., 0.6) is selected as the threshold, the performance decreases, because a large threshold filters a large number of words from the enriched matrix and thereby weakens its effect.

7.4 Convergence Analysis
The PSSS approach is based on the EM algorithm for BHEM (or SHEM) presented in Table 2. This algorithm runs for a predefined number of iterations or until convergence. To study the convergence of the algorithm, we repeat our experiments for the SO prediction and IQAP inference tasks using BHEM with different numbers of iterations. Figure 10 shows that after the first 15 iterations the performance does not change dramatically, and it is nearly constant when more than 30 iterations are performed. This shows that our algorithm converges in fewer than 30 iterations for BHEM. We observed the same pattern for SHEM.

[Figure 10. Convergence of BHEM]

7.5 Bridged vs. Series Model
The bridged and series models are both based on hidden emotions and were developed to predict sense sentiment similarity. Although their best results on the SO prediction and IQAP inference tasks are comparable, they have some significant differences:
• BHEM is considerably faster than SHEM, because the input matrix of BHEM (i.e., W×R) is significantly smaller than the input matrix of SHEM (i.e., W×D).
• In BHEM, the emotional vectors are directly computed from the EM steps, whereas the emotional vector of a word in SHEM is computed from the emotional vectors of the documents containing the word. This adds noise to the emotional vectors of the words.
• BHEM gives a more accurate estimation of the type and number of emotions than SHEM; the reason is explained in Section 7.1.

8 Related Work
Sentiment similarity has not received enough attention to date. Most previous work employed the semantic similarity of word pairs to address the SO prediction and IQAP inference tasks. Turney and Littman (2003) proposed to compute the point-wise mutual information (PMI) between a target word and a set of positive and negative seed words to infer the SO of the target word. They also utilized Latent Semantic Analysis (LSA) (Landauer et al., 1998) as another semantic similarity measure. However, both PMI and LSA are semantic similarity measures. Similarly, Hassan and Radev (2010) presented a graph-based method for predicting the SO of words. They constructed a lexical graph where nodes are words and edges connect two words with semantic similarity obtained from WordNet (Fellbaum, 1998), and they propagated the SO of a set of seeds through this graph. However, such approaches do not take the sentiment similarity between words into account.
For IQAPs, Marneffe et al. (2010) inferred the yes/no answers using the SO of the adjectives: if the SOs of the adjectives have different signs, then the answer conveys no; otherwise, if the absolute SO of the adjective in the question is smaller than that of the adjective in the answer, the answer conveys yes, and otherwise no. In Mohtarami et al. (2012), we used two semantic similarity measures (PMI and LSA) for the IQAP inference task and showed that measuring the sentiment similarity between the adjectives in question and answer leads to higher performance than semantic similarity measures. In Mohtarami et al. (2012), we also proposed an approach to predict the sentiment similarity of words using their emotional vectors, under the assumption that the type and number of emotions are pre-defined. However, in previous research there is little agreement about the number and types of basic emotions, and the emotions present in different datasets can vary. We relaxed this assumption in Mohtarami et al. (2013) by considering the emotions as hidden and presented a hidden emotional model called SHEM. This paper also considers the emotions as hidden and presents another hidden emotional model, BHEM, which gives a more accurate estimation of the number and types of the hidden emotions.

9 Conclusion
We propose a probabilistic approach to infer the sentiment similarity between word senses with respect to automatically learned hidden emotions, utilizing the correlations between reviews, ratings, and words to learn these emotions. We show the effectiveness of our method in two NLP tasks. Experiments show that our sentiment similarity models lead to effective emotional vector construction and significantly outperform semantic similarity measures on both tasks.

References
Hadi Amiri and Tat S. Chua. 2012. Mining Slang and Urban Opinion Words and Phrases from cQA Services: An Optimization Approach. Proceedings of the fifth ACM international conference on Web search and data mining (WSDM). Pp. 193-202.
Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Cambridge, MA: MIT Press.
Ahmed Hassan and Dragomir Radev. 2010. Identifying Text Polarity Using Random Walks. Proceeding in the Association for Computational Linguistics (ACL). Pp: 395–403.
Aminul Islam and Diana Inkpen. 2008. Semantic text similarity using corpus-based word similarity and string similarity. ACM Transactions on Knowledge Discovery from Data (TKDD). Carroll E. Izard. 1971. The face of emotion. New York: Appleton-Century-Crofts. Soo M. Kim and Eduard Hovy. 2004. Determining the sentiment of opinions. Proceeding of the Conference on Computational Linguistics (COLING). Pp: 1367–1373. Thomas K. Landauer, Peter W. Foltz, and Darrell Laham. 1998. Introduction to Latent Semantic Analysis. Discourse Processes. Pp: 259-284. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. Proceeding in the Association for Computational Linguistics (ACL). Pp:142-150. Marie-Catherine D. Marneffe, Christopher D. Manning, and Christopher Potts. 2010. "Was it good? It was provocative." Learning the meaning of scalar adjectives. Proceeding in the Association for Computational Linguistics (ACL). Pp: 167– 176. Mitra Mohtarami, Hadi Amiri, Man Lan, Thanh P. Tran, and Chew L. Tan. 2012. Sense Sentiment Similarity: An Analysis. Proceeding of the Conference on Artificial Intelligence (AAAI). Mitra Mohtarami, Man Lan, and Chew L. Tan. 2013. From Semantic to Emotional Space in Probabilistic Sense Sentiment Analysis. Proceeding of the Conference on Artificial Intelligence (AAAI). Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka. 2007. Textual Affect Sensing for Sociable and Expressive Online Communication. Proceedings of the conference on Affective Computing and Intelligent Interaction (ACII). Pp: 218-229. Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka. 2009. SentiFul: Generating a Reliable Lexicon for Sentiment Analysis. Proceeding of the conference on Affective Computing and Intelligent Interaction (ACII). Pp: 363-368. Andrew Ortony and Terence J. Turner. 1990. What's Basic About Basic Emotions. American Psychological Association. 97(3), 315-331. Christopher Potts, C. 2011. On the negativity of negation. In Nan Li and David Lutz, eds., Proceedings of Semantics and Linguistic Theory 20, 636-659. Peter D. Turney and Michael L. Littman. 2003. Measuring Praise and Criticism: Inference of Semantic Orientation from Association. ACM Transactions on Information Systems, 21(4), 315– 346. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. Proceeding in HLT-EMNLP. Pp: 347–354. 992
2013
97
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 993–1003, Sofia, Bulgaria, August 4-9 2013. c⃝2013 Association for Computational Linguistics A user-centric model of voting intention from Social Media Vasileios Lampos, Daniel Preot¸iuc-Pietro and Trevor Cohn Computer Science Department University of Sheffield, UK {v.lampos,d.preotiuc,t.cohn}@dcs.shef.ac.uk Abstract Social Media contain a multitude of user opinions which can be used to predict realworld phenomena in many domains including politics, finance and health. Most existing methods treat these problems as linear regression, learning to relate word frequencies and other simple features to a known response variable (e.g., voting intention polls or financial indicators). These techniques require very careful filtering of the input texts, as most Social Media posts are irrelevant to the task. In this paper, we present a novel approach which performs high quality filtering automatically, through modelling not just words but also users, framed as a bilinear model with a sparse regulariser. We also consider the problem of modelling groups of related output variables, using a structured multi-task regularisation method. Our experiments on voting intention prediction demonstrate strong performance over large-scale input from Twitter on two distinct case studies, outperforming competitive baselines. 1 Introduction Web Social Media platforms have ushered a new era in human interaction and communication. The main by-product of this activity is vast amounts of user-generated content, a type of information that has already attracted the interest of both marketeers and scientists because it offers – for the first time at a large-scale – unmediated access to peoples’ observations and opinions. One exciting avenue of research concentrates on mining interesting signals automatically from this stream of text input. For example, by exploiting Twitter posts, it is possible to infer time series that correlate with financial indicators (Bollen et al., 2011), track infectious diseases (Lampos and Cristianini, 2010; Lampos et al., 2010; Paul and Dredze, 2011) and, in general, nowcast the magnitude of events emerging in real-life (Sakaki et al., 2010; Lampos and Cristianini, 2012). Other studies suggest ways for modelling opinions encapsulated in this content in order to forge branding strategies (Jansen et al., 2009) or understand various socio-political trends (Tumasjan et al., 2010; O’Connor et al., 2010; Lansdall-Welfare et al., 2012). The main theme of the aforementioned works is linear regression between word frequencies and a real-world quantity. They also tend to incorporate hand-crafted lists of search terms to filter irrelevant content and use sentiment analysis lexicons for extracting opinion bias. Consequently, they are quite often restricted to a specific application and therefore, generalise poorly to new data sets (Gayo-Avello et al., 2011). In this paper, we propose a generic method that aims to be independent of the characteristics described above (use of search terms or sentiment analysis tools). Our approach is able to explore not only word frequencies, but also the space of users by introducing a bilinear formulation for this learning task. Regularised regression on both spaces allows for an automatic selection of the most important terms and users, performing at the same time an improved noise filtering. 
In addition, more advanced regularisation functions enable multi-task learning schemes that can exploit shared structure in the feature space. The latter property becomes very useful in multi-output regression scenarios, where selected features are expected to have correlated as well as anti-correlated impact on each output (e.g., when inferring voting intentions for competing political parties).
We evaluate our methods on the domain of politics using data from the microblogging service of Twitter to infer voting trends. Our proposed framework is able to successfully predict voting intentions for the top-3 and top-4 parties in the United Kingdom (UK) and Austria respectively. In both case studies – bound by different characteristics (including language, time-span and number of users) – the average prediction error is smaller than 1.5% for our best model using multi-task learning. Finally, our qualitative analysis shows that the models uncover interesting and semantically interpretable insights from the data.

2 Data
For the evaluation of the proposed methodologies we have created two data sets of Social Media content with different characteristics based in the UK and Austria respectively. They are used for performing regression aiming to infer voting intention polls in those countries. Data processing is performed using the TrendMiner architecture for Social Media analysis (Preoţiuc-Pietro et al., 2012).

2.1 Tweets from users in the UK
The first data set (we refer to it as Cuk) used in our experimental process consists of approx. 60 million tweets produced by approx. 42K UK Twitter users from 30/04/2010 to 13/02/2012. We assumed each user to be from the UK if the location field in their profile matched with a list of common UK locations and their time zone was set to G.M.T. In this way, we were able to extract hundreds of thousands of UK users, from which we sub-sampled 42K users to be distributed across the UK geographical regions proportionally to their population figures. (Data collection was performed using the Twitter API, http://dev.twitter.com/, to extract all posts for our target users.)

2.2 Tweets for Austria
The second data set (Cau) is shorter in terms of the number of users involved (1.1K), its time span (25/01 to 01/12/2012) and, consequently, the total number of tweets considered (800K). However, this time the selection of users has been made by Austrian political experts who decided which accounts to monitor by subjectively assessing the value of information they may provide towards political-oriented topics. Still, we assume that the different users will produce information of varying quality, and some should be eliminated entirely. However, we emphasise that there may be smaller potential gains from user modelling compared to the UK case study. Another important distinction is language, which for this data set is primarily German with some English.

[Figure 1: Voting intention polls for the UK and Austria. (a) 240 voting intention polls for the 3 major parties in the UK (April 2010 to February 2012); (b) 98 voting intention polls for the 4 major parties in Austria (January to December 2012).]
2.3 Ground Truth
The ground truth for training and evaluating our regression models is formed by voting intention polls from YouGov (UK) and, for the Austrian case study, a collection of Austrian pollsters – as none performed high frequency polling. (See Wikipedia, http://de.wikipedia.org/wiki/Nationalratswahl_in_%D6sterreich_2013.) We focused on the three major parties in the UK, namely Conservatives (CON), Labour (LAB) and Liberal Democrats (LBD), and the four major parties in Austria, namely the Social Democratic Party (SPÖ), People's Party (ÖVP), Freedom Party (FPÖ) and the Green Alternative Party (GRÜ). Matching with the time spans of the data sets described in the previous sections, we have acquired 240 unique polls for the UK and 65 polls for Austria. The latter have been expanded to 98 polls by replicating the poll of day i for day i-1 where possible. (This has been carried out to ensure an adequate number of training points in the experimental process.) There exists some interesting variability towards the end for the UK polls (Fig. 1a), whereas for the Austrian case, the main changing point is between the second and the third party (Fig. 1b).

3 Methods
The textual content posted on Social Media platforms unarguably contains valuable information, but quite often it is hidden under vast amounts of unstructured user generated input. In this section, we propose a set of methods that build on one another, which aim to filter the non desirable noise and extract the most informative features not only based on word frequencies, but also by incorporating users in this process.

3.1 The bilinear model
There exist a number of different possibilities for incorporating user information into a regression model. A simple approach is to expand the feature set, such that each user's effect on the response variable can be modelled separately. Although flexible, this approach would be doomed to failure due to the sheer size of the resulting feature set, and the propensity to overfit all but the largest of training sets. One solution is to group users into different types, such as journalist, politician, activist, etc., but this presupposes a method for classification or clustering of users which is a non-trivial undertaking. Besides, these naïve approaches fail to account for the fact that most users use similar words to express their opinions, by separately parameterising the model for different users or user groups. We propose to account for individual users while restricting all users to share the same vocabulary. This is formulated as a bilinear predictive model,

f(X) = u^T X w + \beta   (1)

where X is an m × p matrix of user-word frequencies and u and w are the model parameters. Let Q ∈ R^{n×m×p} be a tensor which captures our training inputs, where n, m and p denote the considered number of samples (each sample usually refers to a day), terms and users respectively; Q can simply be interpreted as n versions of X (denoted by Q_i in the remainder of the script), a different one for each day, put together. Each element Q_{ijk} holds the frequency of term j for user k during day i in our sample. If a user k has posted c_{i·k} tweets during day i, and c_{ijk} ≤ c_{i·k} of them contain a term j, then the frequency of j for this day and user is defined as

Q_{ijk} = \frac{c_{ijk}}{c_{i \cdot k}}
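To make the data representation concrete, here is a minimal sketch of how one day's slice Q_i and the bilinear response could be computed; the (user, tokens) input format and the vocab_index / user_index dictionaries are assumptions of this sketch, not structures described in the paper.

import numpy as np

def build_day_slice(day_tweets, vocab_index, user_index):
    # Build Q_i with entries Q_ijk = c_ijk / c_i.k for a single day i.
    # day_tweets: iterable of (user, tokens) pairs posted on that day (assumed format).
    m, p = len(vocab_index), len(user_index)
    contains = np.zeros((m, p))            # c_ijk: tweets of user k containing term j
    totals = np.zeros(p)                   # c_i.k: tweets posted by user k on day i
    for user, tokens in day_tweets:
        k = user_index[user]
        totals[k] += 1
        for j in {vocab_index[t] for t in tokens if t in vocab_index}:
            contains[j, k] += 1
    return contains / np.maximum(totals, 1)   # guard against users with no tweets

def bilinear_predict(Q_i, u, w, beta):
    # Bilinear response of Equation (1)/(2); written as w'Q_i u since Q_i is terms x users,
    # which is the same scalar as the paper's u'Q_i w bilinear form.
    return float(w @ Q_i @ u + beta)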
Aiming to learn sparse sets of users and terms that are representative of the voting intention signal, we formulate our optimisation task as follows:

\{w^*, u^*, \beta^*\} = \arg\min_{w,u,\beta} \sum_{i=1}^{n} \left(u^T Q_i w + \beta - y_i\right)^2 + \psi(w, \rho_1) + \psi(u, \rho_2)   (2)

where y ∈ R^n is the response variable (voting intention), w ∈ R^m and u ∈ R^p denote the term and user weights respectively, u^T Q_i w expresses the bilinear term, β ∈ R is a bias term and ψ(·) is a regularisation function with parameters ρ1 or ρ2. The first term in Eq. 2 is the standard regularisation loss function, namely the sum squared error over the training instances. (Note that other loss functions could be used here, such as logistic loss for classification, or more generally bilinear variations of Generalised Linear Models (Nelder and Wedderburn, 1972).)
In the main formulation of our bilinear model, as the regularisation function ψ(·) we use the elastic net (Zou and Hastie, 2005), an extension of the well-studied ℓ1-norm regulariser, known as the LASSO (Tibshirani, 1996). The ℓ1-norm regularisation has found many applications in several scientific fields as it encourages sparse solutions which reduce the possibility of overfitting and enhance the interpretability of the inferred model (Hastie et al., 2009). The elastic net applies an extra penalty on the ℓ2-norm of the weight vector, and can resolve instability issues of LASSO which arise when correlated predictors exist in the input data (Zhao and Yu, 2006). Its regularisation function ψ_el(·) is defined by:

\psi_{el}(w, \lambda, \alpha) = \lambda \left( \frac{1-\alpha}{2} \|w\|_2^2 + \alpha \|w\|_1 \right)   (3)

where λ > 0 and α ∈ [0, 1); setting parameter α to its extremes transforms the elastic net to ridge regression (α = 0) or vanilla LASSO (α = 1).
Eq. 2 can be treated as a biconvex learning task (Al-Khayyal and Falk, 1983), by observing that for a fixed w, learning u is a convex problem and vice versa. Biconvex functions and possible applications have been well studied in the optimisation literature (Quesada and Grossmann, 1995; Pirsiavash et al., 2009). Their main advantage is the ability to solve efficiently non-convex problems by a repeated application of two convex processes, i.e., a form of coordinate ascent. In our case, the bilinear technique makes it possible to explore both word and user spaces, while maintaining a modest training complexity. Therefore, in our bilinear approach we divide learning in two phases, where we learn word and user weights respectively. For the first phase we produce the term-scores matrix V ∈ R^{n×m} with elements given by:

V_{ij} = \sum_{z=1}^{p} u_z Q_{ijz}   (4)

V contains weighted sums of term frequencies over all users for the considered set of days. The weights are held in u and are representative of each user. The initial optimisation task is formulated as:

\{w^*, \beta^*\} = \arg\min_{w,\beta} \|Vw + \beta - y\|_2^2 + \psi_{el}(w, \lambda_1, \alpha_1)   (5)

where we aim to learn a sparse but consistent set of weights w* for the terms of our vocabulary. In the second phase, we are using w* to form the user-scores matrix D ∈ R^{n×p}:

D_{ik} = \sum_{z=1}^{m} w^*_z Q_{izk}   (6)

which now contains weighted sums over all terms for the same set of days. The optimisation task becomes:

\{u^*, \beta^*\} = \arg\min_{u,\beta} \|Du + \beta - y\|_2^2 + \psi_{el}(u, \lambda_2, \alpha_2)   (7)

This process continues iteratively by inserting the weights of the second phase back into phase one, and so on until convergence. We cannot claim that a global optimum will be reached, but biconvexity guarantees that our global objective (Eq. 2) will decrease in each step of this iterative process.
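The two-phase procedure of Equations (4)-(7) maps directly onto off-the-shelf elastic-net solvers. Below is a minimal sketch of the alternating loop using scikit-learn's ElasticNet; note that alpha and l1_ratio are scikit-learn's own parameterisation (not an exact match for λ and α above), and the fixed number of rounds is an assumption for illustration.

import numpy as np
from sklearn.linear_model import ElasticNet

def bilinear_elastic_net(Q, y, lambda1, alpha1, lambda2, alpha2, n_rounds=2):
    # Alternating optimisation of the bilinear elastic net (Equations 4-7).
    # Q : array of shape (n, m, p) -- days x terms x users
    # y : array of shape (n,)      -- voting intention series for one party
    n, m, p = Q.shape
    u = np.ones(p) / p                                    # start from uniform user weights
    for _ in range(n_rounds):
        V = np.einsum('imp,p->im', Q, u)                  # Eq. 4: term scores per day
        term_model = ElasticNet(alpha=lambda1, l1_ratio=alpha1).fit(V, y)
        w = term_model.coef_                              # Eq. 5: sparse term weights
        D = np.einsum('imp,m->ip', Q, w)                  # Eq. 6: user scores per day
        user_model = ElasticNet(alpha=lambda2, l1_ratio=alpha2).fit(D, y)
        u = user_model.coef_                              # Eq. 7: sparse user weights
    return w, u, user_model.intercept_

Each half-step is an ordinary convex elastic-net regression, which is exactly why the biconvex view keeps training tractable.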
In the remainder of this paper, we refer to the method described above as Bilinear Elastic Net (BEN).

3.2 Exploiting term-target or user-target relationships
The previous model assumes that the response variable y holds information about a single inference target. However, the task that we are addressing in this paper usually implies the existence of several targets, i.e., different political parties or politicians. An important property, therefore, is the ability to perform multiple output regression. A simple way of adapting the model to the multiple output scenario is by framing a separate learning problem for each output, but tying together some of the parameters. Here we consider tying together the user weights u, to enforce that the same set of users are relevant to all tasks, while learning different term weights. Note that the converse situation, where the w's are tied and the u's are independent, can be formulated in an equivalent manner. Suppose that our target variable y ∈ R^{τn} now refers to τ political entities, y = [y_1^T y_2^T ... y_τ^T]^T; in this formulation the top n elements of y correspond to the first political entity, the next n elements to the second and so on. In the first phase of the bilinear model, we would have to solve the following optimisation task:

\{w^*, \beta^*\} = \arg\min_{w,\beta} \sum_{i=1}^{\tau} \|Vw_i + \beta_i - y_i\|_2^2 + \sum_{i=1}^{\tau} \psi_{el}(w_i, \lambda_1, \alpha_1)   (8)

where V is given by Eq. 4 and w* ∈ R^{τm} denotes the vector of weights, which can be sliced into τ sub-vectors {w*_1, ..., w*_τ}, each one representing a political entity. In the second phase, the sub-vectors w*_i are used to form the input matrices D_i, i ∈ {1, ..., τ}, with elements given by Eq. 6. The input matrix D' is formed by the vertical concatenation of all D_i user score matrices, i.e., D' = [D_1^T ... D_τ^T]^T, and the optimisation target is equivalent to the one expressed in Eq. 7. Since D' ∈ R^{τn×p}, the user weight vector u* ∈ R^p and thus we are learning a single weight per user and not one per political party as in the previous step.
The method described above allows learning different term weights per response variable and then binds them under a shared set of user weights. As mentioned before, one could also try the opposite (i.e., start by expanding the user space); both those models can also be optimised in an iterative process. However, our experiments revealed that those approaches did not improve on the performance of BEN. Still, this behaviour could be problem-specific, i.e., learning different words from a shared set of users (and the opposite) may not be a good modelling practice for the domain of politics. Nevertheless, this observation served as a motivation for the method described in the next section, where we extract a consistent set of words and users that are weighted differently among the considered political entities.

3.3 Multi-task learning with the ℓ1/ℓ2 regulariser
All previous models – even when combining all inference targets – were not able to explore relationships across the different task domains; in our case, a task domain is defined by a specific political label or party. Ideally, we would like to make a sparse selection of words and users but with a regulariser that promotes inter-task sharing of structure, so that many features may have a positive influence towards one or more parties, but negative towards the remaining one(s). It is possible to achieve this multi-task learning property by introducing a different set of regularisation constraints in the optimisation function.
We perform multi-task learning using an extension of group LASSO (Yuan and Lin, 2006), a method known as ℓ1/ℓ2 regularisation (Argyriou et al., 2008; Liu et al., 2009). Group LASSO exploits a predefined group structure on the feature space and tries to achieve sparsity at the group level, i.e., it does not perform feature selection (unlike the elastic net), but group selection. The ℓ1/ℓ2 regulariser extends this notion for a τ-dimensional response variable. The global optimisation target is now formulated as:

\{W^*, U^*, \beta^*\} = \arg\min_{W,U,\beta} \sum_{t=1}^{\tau} \sum_{i=1}^{n} \left(u_t^T Q_i w_t + \beta_t - y_{ti}\right)^2 + \lambda_1 \sum_{j=1}^{m} \|W_j\|_2 + \lambda_2 \sum_{k=1}^{p} \|U_k\|_2   (9)

where the input matrix Q_i is defined in the same way as earlier, W = [w_1 ... w_τ] is the term weight matrix (each w_t refers to the t-th political entity or task), equivalently U = [u_1 ... u_τ], W_j and U_j denote the j-th rows of the weight matrices W and U respectively, and the vector β ∈ R^τ holds the bias terms per task. In this optimisation process, we aim to enforce sparsity in the feature space but in a structured manner. Notice that we are now regularising the ℓ2,1 mixed norm of W and U, which is defined as the sum of the row ℓ2-norms of those matrices. As a result, we expect to encourage the activation of a sparse set of features (corresponding to the rows of W and U), but with nonzero weights across the τ tasks (Argyriou et al., 2008). Consequently, we are performing filtering (many users and words will have zero weights) and, at the same time, assigning weights of different magnitude and sign to the selected features, something that suits a political opinion mining application, where pro-A often means anti-B. Eq. 9 can be broken into two convex tasks (following the same notion as in Eqs. 5 and 7), where we individually learn {W, β} and then {U, β}; each step of the process is a standard linear regression problem with an ℓ1/ℓ2 regulariser. Again, we are able to iterate this bilinear process and in each step convexity is guaranteed. We refer to this method as Bilinear Group ℓ1/ℓ2 (BGL).
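One standard way to handle the row-wise ℓ2,1 penalties of Equation (9) is proximal gradient descent, whose key ingredient is the group soft-thresholding operator sketched below. The paper does not specify its solver, so presenting it this way is an assumption; the sketch is only meant to illustrate why whole rows – i.e. whole words or users – are switched off across all tasks at once while surviving rows keep per-task magnitudes and signs.

import numpy as np

def l21_norm(W):
    # Sum of row l2-norms: the mixed l2,1 penalty used in Equation (9).
    return np.sqrt((W ** 2).sum(axis=1)).sum()

def prox_l21(W, threshold):
    # Proximal operator of threshold * l2,1 norm: group soft-thresholding of each row.
    # Rows whose l2-norm falls below `threshold` are zeroed (the word/user is dropped
    # for every task); the rest are shrunk but keep their per-task signs.
    norms = np.sqrt((W ** 2).sum(axis=1, keepdims=True))
    scale = np.maximum(0.0, 1.0 - threshold / np.maximum(norms, 1e-12))
    return W * scale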
4 Experiments
The proposed models are evaluated on Cuk and Cau, which have been introduced in Section 2. We measure predictive performance, compare it to the performance of several competitive baselines, and provide a qualitative analysis of the parameters learned by the models.

4.1 Data preprocessing
Basic preprocessing has been applied on the vocabulary index of Cuk and Cau aiming to filter out some of the word features and partially reduce the dimensionality of the problem. Stop words and web links were removed in both sets, together with character sequences of length <4 and <3 for Cuk and Cau respectively. (Most of the time those character sequences were not valid words; this pattern was different in each language and thus a different filtering threshold was applied in each data set.) As the vocabulary size of Cuk was significantly larger, for this data set we have additionally merged Twitter hashtags (i.e., words starting with '#') with their exact non topic word match, where possible (by dropping the '#' when the word existed in the index). After performing the preprocessing routines described above, the vocabulary sizes for Cuk and Cau were set to 80,976 and 22,917 respectively.

4.2 Predictive accuracy
To evaluate the predictive accuracy of our methods, we have chosen to emulate a real-life scenario of voting intention prediction. The evaluation process starts by using a fixed set of polls matching to consecutive time points in the past for training and validating the parameters of each model. Testing is performed on the following δ (unseen) polls of the data set. In the next step of the evaluation process, the training/validation set is increased by merging it with the previously used test set (δ polls), and testing is now performed on the next δ unseen polls. In our experiments, the number of steps in this evaluation process is set to 10 and in each step the size of the test set is set to δ = 5 polls. Hence, each model is tested on 50 unseen and consecutive in time samples. The loss function in our evaluation is the standard Mean Square Error (MSE), but to allow a better interpretation of the results, we display its root (RMSE) in tables and figures. (RMSE has the same metric units as the response variable.) The parameters of each model (αi for BEN and λi for BEN and BGL, i ∈ {1, 2}) are optimised using a held-out validation set by performing grid search. Note that it may be tempting to adapt the regularisation parameters in each phase of the iterative training loop, however this would change the global objective (see Eqs. 2 and 9) and thus convergence will not be guaranteed.
A key question is how many iterations of training are required to reach convergence. Figure 2 illustrates how the BEN global objective function (Eq. 2) converges during this iterative process and the model's performance on an unseen validation set.

[Figure 2: Global objective function and RMSE on a validation set for BEN in 15 iterations (30 steps) of the model.]

Notice that there is a large performance improvement after the first step (which alone is a linear solver), but overfitting occurs after step 11. Based on this result, for subsequent experiments we run the training process for two iterations (4 steps), and take the best performing model on the held-out validation set.
We compare the performance of our methods with three baselines. The first makes a constant prediction of the mean value of the response variable y in the training set (Bµ); the second predicts the last value of y (Blast); and the third baseline (LEN) is a linear regression over the terms using elastic net regularisation. Recalling that each test set is made of 5 polls, Blast should be considered as a hard baseline to beat given that voting intentions tend to have a smooth behaviour. (The last response value could be easily included as a feature in the model, and would likely improve predictive performance.) Moreover, improving on LEN partly justifies the usefulness of a bilinear approach compared to a linear one.

Table 1: UK case study — Average RMSEs representing the error of the inferred voting intention percentage for the 10-step validation process; µ denotes the mean RMSE across the three political parties for each baseline or inference method.
Method  CON    LAB    LBD    µ
Bµ      2.272  1.663  1.136  1.69
Blast   2      2.074  1.095  1.723
LEN     3.845  2.912  2.445  3.067
BEN     1.939  1.644  1.136  1.573
BGL     1.785  1.595  1.054  1.478

Table 2: Austrian case study — Average RMSEs for the 10-step validation process.
Method  SPÖ    ÖVP    FPÖ    GRÜ    µ
Bµ      1.535  1.373  3.3    1.197  1.851
Blast   1.148  1.556  1.639  1.536  1.47
LEN     1.291  1.286  2.039  1.152  1.442
BEN     1.392  1.31   2.89   1.205  1.699
BGL     1.619  1.005  1.757  1.374  1.439
Performance results comparing inferred voting intention percentages and polls for Cuk and Cau are presented in Tables 1 and 2 respectively. For the UK case study, both BEN and BGL are able to beat all baselines in average performance across all parties. However, in the Austrian case study, LEN performs better than BEN, something that could be justified by the fact that the users in Cau were selected by domain experts, and consequently there was not much gain to be had by filtering them further. Nevertheless, the difference in performance was rather small (approx. 0.26% error) and the inferences of LEN and BEN followed a very similar pattern (mean ρ = .94 with p < 10^-10, Pearson's linear correlation averaged across the four Austrian parties). Multi-task learning (BGL) delivered the best inference performance in both case studies, which was on average smaller than 1.48% (RMSE).
Inferences for both BEN and BGL have been plotted in Figures 3 and 4. They are presented as continuous lines of 50 inferred points (per party), created by concatenating the inferences on all test sets; the voting intention polls were plotted separately to allow a better presentation.

[Figure 3: UK case study — Voting intention inference results (50 polls, 3 parties): (a) ground truth as presented in voting intention polls (Fig. 1a), (b) BEN, (c) BGL.]
[Figure 4: Austrian case study — Voting intention inference results (50 polls, 4 parties): (a) ground truth as presented in voting intention polls (Fig. 1b), (b) BEN, (c) BGL.]

For the UK case study, one may observe that BEN (Fig. 3b) cannot register any change – with the exception of one test point – in the leading party fight (CON versus LAB); BGL (Fig. 3c) performs much better in that aspect. In the Austrian case study this characteristic becomes more obvious: BEN (Fig. 4b) consistently predicts the wrong ranking of ÖVP and FPÖ, whereas BGL (Fig. 4c) does much better. Most importantly, a general observation is that BEN's predictions are smooth and do not vary significantly with time. This might be a result of overfitting the model to a single response variable which usually has a smooth behaviour. On the contrary, the multi-task learning property of BGL reduces this type of overfitting, providing more statistical evidence for the terms and users and thus yielding not only a better inference performance, but also a more accurate model.

4.3 Qualitative Analysis
In this section, we refer to features that have been selected and weighted as significant by our bilinear learning functions. Based on the weights for the word and the user spaces that we retrieve after the application of BGL in the last step of the evaluation process (see the previous section), we compute a score (weighted sum) for each tweet in our training data sets for both Cuk and Cau. Table 3 shows examples of interesting tweets amongst the top weighted ones (positively as well as negatively) per party. Together with their text (anonymised for privacy reasons) and scores, we also provide an attribute for the author (if present).

Table 3: Examples of tweets amongst the ones with top positive and negative scores per party for both the Cuk and Cau data sets (German tweets are accompanied by English translations). Notice that weight magnitudes may differ per case study and party as they are based on the range of the response variable and the total number of selected features.
Party | Tweet | Score | Author
CON | "PM in friendly chat with top EU mate, Sweden's Fredrik Reinfeldt, before family photo" | 1.334 | Journalist
CON | "Have Liberal Democrats broken electoral rules? Blog on Labour complaint to cabinet secretary" | -0.991 | Journalist
LAB | "Blog Post Liverpool: City of Radicals Website now Live <link> #liverpool #art" | 1.954 | Art Fanzine
LAB | "I am so pleased to hear Paul Savage who worked for the Labour group has been Appointed the Marketing manager for the baths hall GREAT NEWS" | -0.552 | Politician (Labour)
LBD | "RT @user: Must be awful for TV bosses to keep getting knocked back by all the women they ask to host election night (via @user)" | 0.874 | LibDem MP
LBD | "Blog Post Liverpool: City of Radicals 2011 – More Details Announced #liverpool #art" | -0.521 | Art Fanzine
SPÖ | "Inflationsrate in Ö. im Juli leicht gesunken: von 2,2 auf 2,1%. Teurer wurde Wohnen, Wasser, Energie." (Translation: "Inflation rate in Austria slightly down in July from 2.2 to 2.1%. Accommodation, water, energy more expensive.") | 0.745 | Journalist
SPÖ | "Hans Rauscher zu Felix #Baumgartner "A klaner Hitler" <link>" (Translation: "Hans Rauscher on Felix #Baumgartner "A little Hitler" <link>") | -1.711 | Journalist
ÖVP | "#IchPirat setze mich dafür ein, dass eine große Koalition mathematisch verhindert wird! 1.Geige: #Gruene + #FPOe + #OeVP" (Translation: "#IPirate am committed to prevent a grand coalition mathematically! Calling the tune: #Greens + #FPO + #OVP") | 4.953 | User
ÖVP | "kann das buch "res publica" von johannes #voggenhuber wirklich empfehlen! so zum nachdenken und so... #europa #demokratie" (Translation: "can really recommend the book "res publica" by johannes #voggenhuber! Food for thought and so on #europe #democracy") | -2.323 | User
FPÖ | "Neue Kampagne der #Krone zur #Wehrpflicht: "GIB BELLO EINE STIMME!"" (Translation: "New campaign by the #Krone on #Conscription: "GIVE WOOFY A VOICE!"") | 7.44 | Political satire
FPÖ | "Kampagne der Wiener SPÖ "zum Zusammenleben" spielt Rechtspopulisten in die Hände <link>" (Translation: "Campaign of the Viennese SPÖ on "Living together" plays right into the hands of right-wing populists <link>") | -3.44 | Human Rights
GRÜ | "Protestsong gegen die Abschaffung des Bachelor-Studiums Internationale Entwicklung: <link> #IEbleibt #unibrennt #uniwut" (Translation: "Protest songs against the closing-down of the bachelor course of International Development: <link> #IDremains #uniburns #unirage") | 1.45 | Student Union
GRÜ | "Pilz "ich will in dieser Republik weder kriminelle Asylwerber, noch kriminelle orange Politiker" - BZÖ-Abschiebung ok, aber wohin? #amPunkt" (Translation: "Pilz "i want neither criminal asylum-seekers, nor criminal orange politicians in this republic" - BZÖ-Deportation OK, but where? #amPunkt") | -2.172 | User
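The per-tweet score used to rank the examples in Table 3 is described only as a weighted sum over the learned word and user weights; one plausible reading of that description is sketched below. The tokenised-tweet input and the dictionary representation of the BGL weights are assumptions of this sketch.

def tweet_score(tokens, author, word_weights, user_weights):
    # Score a single tweet as (user weight) * (sum of its words' weights) for one party.
    # word_weights / user_weights: sparse dictionaries of the weights learned by BGL;
    # unseen words and users contribute nothing.
    w_sum = sum(word_weights.get(tok, 0.0) for tok in tokens)
    return user_weights.get(author, 0.0) * w_sum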
In the displayed tweets for the UK study, the only possible outlier is the ‘Art Fanzine’; still, it seems to register a consistent behaviour (positive towards 1000 LAB, negative towards LBD) and, of course, hidden, indirect relationships may exist between political opinion and art. The Austrian case study revealed even more interesting tweets since training was conducted on data from a very active preelection period (we made an effort to translate those tweets in English language as well). For a better interpretation of the presented tweets, it may be useful to know that ‘Johannes Voggenhuber’ (who receives a positive comment for his book) and ‘Peter Pilz’ (whose comment is questioned) are members of GR ¨U, ‘Krone’ (or Kronen Zeitung) is the major newspaper in Austria10 and that FP ¨O is labelled as a far right party, something that may cause various reactions from ‘Human Rights’ organisations. 5 Related Work The topic of political opinion mining from Social Media has been the focus of various recent research works. Several papers have presented methods that aim to predict the result of an election (Tumasjan et al., 2010; Bermingham and Smeaton, 2011) or to model voting intention and other kinds of socio-political polls (O’Connor et al., 2010; Lampos, 2012). Their common feature is a methodology based on a meta-analysis of word frequencies using off-the-shelf sentiment tools such as LIWC (Pennebaker et al., 2007) or Senti-WordNet (Esuli and Sebastiani, 2006). Moreover, the proposed techniques tend to incorporate posting volume figures as well as handcrafted lists of words relevant to the task (e.g., names of politicians or parties) in order to filter the content successfully. Such papers have been criticised as their methods do not generalise when applied on different data sets. According to the work in (Gayo-Avello et al., 2011), the methods presented in (Tumasjan et al., 2010) and (O’Connor et al., 2010) failed to predict the result of US congressional elections in 2009. We disagree with the arguments supporting the statement “you cannot predict elections with Twitter” (Gayo-Avello, 2012), as many times in the past actual voting intention polls have also failed to predict election outcomes, but we agree that most methods that have been proposed so far were not entirely generic. It is a fact that the 10“Accused of abusing its near monopoly to manipulate public opinion in Austria”, Wikipedia, 19/02/2013, http: //en.wikipedia.org/wiki/Kronen_Zeitung. majority of sentiment analysis tools are Englishspecific (or even American English) and, most importantly, political word lists (or ontologies) change in time, per country and per party; hence, generalisable methods should make an effort to limit reliance from such tools. Furthermore, our work – indirectly – meets the guidelines proposed in (Metaxas et al., 2011) as we have developed a framework of “well-defined” algorithms that are “Social Web aware” (since the bilinear approach aims to improve noise filtering) and that have been tested on two evaluation scenarios with distinct characteristics. 6 Conclusions and Future Work We have presented a novel method for text regression that exploits both word and user spaces by solving a bilinear optimisation task, and an extension that applies multi-task learning for multioutput inference. 
Our approach performs feature selection – hence, noise filtering – on large-scale user-generated inputs automatically, generalises across two languages without manual adaptations and delivers some significant improvements over strong performance baselines (< 1.5% error when predicting polls). The application domain in this paper was politics, though the presented methods are generic and could be easily applied on various other domains, such as health or finance. Future work may investigate further modelling improvements achieved by applying different regularisation functions as well as the adaptation of the presented models to classification problems. Finally, in the application level, we aim at an indepth analysis of patterns and characteristics in the extracted sets of features by collaborating with domain experts (e.g., political analysts). Acknowledgments This work was funded by the TrendMiner project (EU-FP7-ICT n.287863). All authors would like to thank the political analysts (and especially Paul Ringler) from SORA11 for their useful insights on politics in Austria. 11SORA – Institute for Social Research and Consulting, http://www.sora.at. 1001 References Faiz A Al-Khayyal and James E Falk. 1983. Jointly Constrained Biconvex Programming. Mathematics of Operations Research, 8(2):273–286. Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. 2008. Convex multi-task feature learning. Machine Learning, 73(3):243–272, January. Adam Bermingham and Alan F Smeaton. 2011. On using Twitter to monitor political sentiment and predict election results. In Proceedings of the Workshop on Sentiment Analysis where AI meets Psychology (SAAIP 2011), pages 2–10, November. Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science, 2(1):1–8, March. Andrea Esuli and Fabrizio Sebastiani. 2006. SentiWordNet: A publicly available lexical resource for opinion mining. In Proceeding of the 5th Conference on Language Resources and Evaluation (LREC), pages 417–422. Daniel Gayo-Avello, Panagiotis T Metaxas, and Eni Mustafaraj. 2011. Limits of Electoral Predictions using Twitter. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (ICWSM), pages 490–493. Daniel Gayo-Avello. 2012. No, You Cannot Predict Elections with Twitter. IEEE Internet Computing, 16(6):91–94, November. Trevor Hastie, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning. Springer Series in Statistics. Springer. Bernard J Jansen, Mimi Zhang, Kate Sobel, and Abdur Chowdury. 2009. Twitter power: Tweets as electronic word of mouth. Journal of the American Society for Information Science and Technology, 60(11):2169–2188. Vasileios Lampos and Nello Cristianini. 2010. Tracking the flu pandemic by monitoring the Social Web. In 2nd IAPR Workshop on Cognitive Information Processing, pages 411–416. IEEE Press. Vasileios Lampos and Nello Cristianini. 2012. Nowcasting Events from the Social Web with Statistical Learning. ACM Transactions on Intelligent Systems and Technology, 3(4):1–22, September. Vasileios Lampos, Tijl De Bie, and Nello Cristianini. 2010. Flu Detector - Tracking Epidemics on Twitter. In Proceedings of European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), pages 599– 602. Springer. Vasileios Lampos. 2012. On voting intentions inference from Twitter content: a case study on UK 2010 General Election. CoRR, April. 
Thomas Lansdall-Welfare, Vasileios Lampos, and Nello Cristianini. 2012. Effects of the recession on public mood in the UK. In Proceedings of the 21st international conference companion on World Wide Web, WWW ’12 Companion, pages 1221– 1226. ACM. Jun Liu, Shuiwang Ji, and Jieping Ye. 2009. Multitask feature learning via efficient l2,1-norm minimization. pages 339–348, June. Panagiotis T Metaxas, Eni Mustafaraj, and Daniel Gayo-Avello. 2011. How (Not) To Predict Elections. In IEEE 3rd International Conference on Social Computing (SocialCom), pages 165 – 171. IEEE Press. John A Nelder and Robert W M Wedderburn. 1972. Generalized Linear Models. Journal of the Royal Statistical Society - Series A (General), 135(3):370. Brendan O’Connor, Ramnath Balasubramanyan, Bryan R Routledge, and Noah A Smith. 2010. From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series. In Proceedings of the International AAAI Conference on Weblogs and Social Media, pages 122–129. AAAI Press. Michael J Paul and Mark Dredze. 2011. You Are What You Tweet: Analyzing Twitter for Public Health. Proceedings of the 5th International AAAI Conference on Weblogs and Social Media, pages 265–272. James W Pennebaker, Cindy K Chung, Molly Ireland, Amy Gonzales, and Roger J Booth. 2007. The Development and Psychometric Properties of LIWC2007. Technical report, Universities of Texas at Austin & University of Auckland, New Zealand. Hamed Pirsiavash, Deva Ramanan, and Charless Fowlkes. 2009. Bilinear classifiers for visual recognition. In Advances in Neural Information Processing Systems, volume 22, pages 1482–1490. Daniel Preot¸iuc-Pietro, Sina Samangooei, Trevor Cohn, Nicholas Gibbins, and Mahesan Niranjan. 2012. Trendminer: An Architecture for Real Time Analysis of Social Media Text. In Sixth International AAAI Conference on Weblogs and Social Media, pages 38–42. AAAI Press, July. Ignacio Quesada and Ignacio E Grossmann. 1995. A global optimization algorithm for linear fractional and bilinear programs. Journal of Global Optimization, 6(1):39–76, January. Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes Twitter users: real-time event detection by social sensors. In Proceedings of the 19th international conference on World Wide Web (WWW), pages 851–860. ACM. Robert Tibshirani. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society - Series B (Methodological), 58(1):267–288. 1002 Andranik Tumasjan, Timm O Sprenger, Philipp G Sandner, and Isabell M Welpe. 2010. Predicting elections with Twitter: What 140 characters reveal about political sentiment. In Proceedings of the 4th International AAAI Conference on Weblogs and Social Media, pages 178–185. AAAI. Ming Yuan and Yi Lin. 2006. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society - Series B: Statistical Methodology, 68(1):49–67. Peng Zhao and Bin Yu. 2006. On model selection consistency of Lasso. Journal of Machine Learning Research, 7(11):2541–2563. Hui Zou and Trevor Hastie. 2005. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, April. 1003
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1004–1013, Sofia, Bulgaria, August 4-9 2013. © 2013 Association for Computational Linguistics

Using Supervised Bigram-based ILP for Extractive Summarization

Chen Li, Xian Qian, and Yang Liu
The University of Texas at Dallas, Computer Science Department
chenli,qx,[email protected]

Abstract

In this paper, we propose a bigram-based supervised method for extractive document summarization in the integer linear programming (ILP) framework. For each bigram, a regression model is used to estimate its frequency in the reference summary. The regression model uses a variety of indicative features and is trained discriminatively to minimize the distance between the estimated and the ground-truth bigram frequency in the reference summary. During testing, the sentence selection problem is formulated as an ILP problem to maximize the bigram gains. We demonstrate that our system consistently outperforms the previous ILP method on different TAC data sets, and performs competitively compared to the best results in the TAC evaluations. We also conducted various analyses to show the impact of bigram selection, weight estimation, and ILP setup.

1 Introduction

Extractive summarization is a sentence selection problem: identifying important summary sentences from one or multiple documents. Many methods have been developed for this problem, including supervised approaches that use classifiers to predict summary sentences, graph-based approaches to rank the sentences, and recent global optimization methods such as integer linear programming (ILP) and submodular methods. These global optimization methods have been shown to be quite powerful for extractive summarization, because they try to select important sentences and remove redundancy at the same time under the length constraint.

Gillick and Favre (Gillick and Favre, 2009) introduced the concept-based ILP for summarization. Their system achieved the best result in the TAC 09 summarization task based on the ROUGE evaluation metric. In this approach the goal is to maximize the sum of the weights of the language concepts that appear in the summary. They used bigrams as such language concepts. The association between the language concepts and sentences serves as the constraints. This ILP method is formally represented as below (see (Gillick and Favre, 2009) for more details):

  max   Σ_i w_i c_i                          (1)
  s.t.  s_j Occ_{ij} ≤ c_i        ∀ i, j     (2)
        Σ_j s_j Occ_{ij} ≥ c_i    ∀ i        (3)
        Σ_j l_j s_j ≤ L                      (4)
        c_i ∈ {0, 1}              ∀ i        (5)
        s_j ∈ {0, 1}              ∀ j        (6)

c_i and s_j are binary variables (shown in (5) and (6)) that indicate the presence of a concept and a sentence respectively. w_i is a concept's weight and Occ_{ij} means the occurrence of concept i in sentence j. Inequalities (2) and (3) associate the sentences and concepts. They ensure that selecting a sentence leads to the selection of all the concepts it contains, and that selecting a concept only happens when it is present in at least one of the selected sentences.

There are two important components in this concept-based ILP: one is how to select the concepts (c_i); the second is how to set up their weights (w_i). Gillick and Favre (Gillick and Favre, 2009) used bigrams as concepts, which are selected from a subset of the sentences, and their document frequency as the weight in the objective function.

In this paper, we propose to find a candidate summary such that the language concepts (e.g., bigrams) in this candidate summary and the reference summary can have the same frequency.
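For concreteness, the following is a minimal sketch of the baseline concept-based ILP in (1)-(6), written with the PuLP modelling library (an assumption on our part; this is not the original ICSI code), using toy concept weights, occurrences, and sentence lengths.

```python
# Sketch of the concept-based ILP of Gillick and Favre (2009), eqs. (1)-(6).
# PuLP is assumed to be available; all inputs below are illustrative toy values.
import pulp

weights = {"b1": 3.0, "b2": 2.0, "b3": 1.0}              # w_i: concept (bigram) weights
occ = {("b1", 0): 1, ("b2", 0): 1, ("b2", 1): 1,          # Occ_ij: concept i occurs in sentence j
       ("b3", 1): 1}
sent_len = {0: 12, 1: 9}                                  # l_j: sentence lengths
L = 15                                                    # summary length budget

prob = pulp.LpProblem("concept_ilp", pulp.LpMaximize)
c = {i: pulp.LpVariable(f"c_{i}", cat="Binary") for i in weights}        # eq. (5)
s = {j: pulp.LpVariable(f"s_{j}", cat="Binary") for j in sent_len}       # eq. (6)

prob += pulp.lpSum(weights[i] * c[i] for i in weights)                   # objective (1)
for (i, j) in occ:
    prob += s[j] <= c[i]                                                 # constraint (2)
for i in weights:
    prob += pulp.lpSum(s[j] for (ii, j) in occ if ii == i) >= c[i]       # constraint (3)
prob += pulp.lpSum(sent_len[j] * s[j] for j in sent_len) <= L            # constraint (4)

prob.solve()
print([j for j in s if s[j].value() == 1])   # indices of the selected sentences
```

The proposed method described next keeps the sentence variables but replaces the coverage-style constraints and document-frequency weights with a bigram-gain objective.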
We expect this restriction is more consistent with the ROUGE evaluation metric used for summarization (Lin, 2004). In addition, in the previous concept-based ILP method, the constraints are with respect to the appearance of language concepts, hence it cannot distinguish the importance of different language concepts in the reference summary. Our method can decide not only which language concepts to use in the ILP, but also the frequency of these language concepts in the candidate summary. To estimate the bigram frequency in the summary, we propose to use a supervised regression model that is discriminatively trained using a variety of features. Our experiments on several TAC summarization data sets demonstrate that this proposed method outperforms the previous ILP system and often the best performing TAC system.

2 Proposed Method

2.1 Bigram Gain Maximization by ILP

We choose bigrams as the language concepts in our proposed method since they have been successfully used in previous work. In addition, we expect that the bigram-oriented ILP is consistent with the ROUGE-2 measure widely used for summarization evaluation.

We start the description of our approach for the scenario where a human abstractive summary is provided, and the task is to select sentences to form an extractive summary. Our goal is then to make the bigram frequency in this system summary as close as possible to that in the reference. For each bigram b, we define its gain:

  Gain(b, sum) = min{ n_{b,ref}, n_{b,sum} }    (7)

where n_{b,ref} is the frequency of b in the reference summary, and n_{b,sum} is the frequency of b in the automatic summary. The gain of a bigram is no more than its frequency in the reference summary, hence adding redundant bigrams will not increase the gain. The total gain of an extractive summary is defined as the sum of every bigram gain in the summary:

  Gain(sum) = Σ_b Gain(b, sum) = Σ_b min{ n_{b,ref}, Σ_s z(s) · n_{b,s} }    (8)

where s is a sentence in the document, n_{b,s} is the frequency of b in sentence s, and z(s) is a binary variable indicating whether s is selected in the summary. The goal is to find z that maximizes Gain(sum) (formula (8)) under the length constraint L.

This problem can be cast as an ILP problem. First, using the fact that

  min{a, x} = 0.5(−|x − a| + x + a),   x, a ≥ 0,

we have

  Σ_b min{ n_{b,ref}, Σ_s z(s) · n_{b,s} }
    = Σ_b 0.5 · ( −| n_{b,ref} − Σ_s z(s) · n_{b,s} | + n_{b,ref} + Σ_s z(s) · n_{b,s} ).

Now the problem is equivalent to:

  max_z  Σ_b ( −| n_{b,ref} − Σ_s z(s) · n_{b,s} | + n_{b,ref} + Σ_s z(s) · n_{b,s} )
  s.t.   Σ_s z(s) · |S| ≤ L;   z(s) ∈ {0, 1}

This is equivalent to the ILP:

  max   Σ_b ( Σ_s z(s) · n_{b,s} − C_b )                       (9)
  s.t.  Σ_s z(s) · |S| ≤ L                                     (10)
        z(s) ∈ {0, 1}                                          (11)
        −C_b ≤ n_{b,ref} − Σ_s z(s) · n_{b,s} ≤ C_b            (12)

where C_b is an auxiliary variable we introduce that is equal to | n_{b,ref} − Σ_s z(s) · n_{b,s} |, and n_{b,ref} is a constant that can be dropped from the objective function.

2.2 Regression Model for Bigram Frequency Estimation

In the previous section, we assumed that n_{b,ref} is at hand (the reference abstractive summary is given) and proposed a bigram-based optimization framework for extractive summarization. However, for the summarization task, the bigram frequency is unknown, and thus our first goal is to estimate such frequency. We propose to use a regression model for this. Since a bigram's frequency depends on the summary length (L), we use a normalized frequency in our method. Let n_{b,ref} = N_{b,ref} · L, where N_{b,ref} = n(b, ref) / Σ_b n(b, ref) is the normalized frequency in the summary. Now the problem is to automatically estimate N_{b,ref}.
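Before turning to that estimation step, a minimal sketch of the ILP in (9)-(12) may be useful. It assumes n_{b,ref} is known (the oracle setting above) and again uses the PuLP library with toy counts; it is illustrative only and not the authors' implementation.

```python
# Sketch of the bigram-gain ILP in eqs. (9)-(12); PuLP assumed, toy data.
import pulp

n_ref = {"the_cat": 2, "cat_sat": 1}                    # n_{b,ref}: reference bigram counts
n_bs = {("the_cat", 0): 1, ("cat_sat", 0): 1,            # n_{b,s}: bigram counts per sentence
        ("the_cat", 1): 1}
sent_len = {0: 8, 1: 6}
L = 10

prob = pulp.LpProblem("bigram_gain_ilp", pulp.LpMaximize)
z = {s: pulp.LpVariable(f"z_{s}", cat="Binary") for s in sent_len}       # eq. (11)
C = {b: pulp.LpVariable(f"C_{b}", lowBound=0) for b in n_ref}            # |n_ref - n_sum|

def n_sum(b):
    # summary count of bigram b, as a linear expression in the z variables
    return pulp.lpSum(z[s] * n_bs.get((b, s), 0) for s in sent_len)

prob += pulp.lpSum(n_sum(b) - C[b] for b in n_ref)                       # objective (9)
prob += pulp.lpSum(z[s] * sent_len[s] for s in sent_len) <= L            # length (10)
for b in n_ref:                                                          # constraint (12)
    prob += n_ref[b] - n_sum(b) <= C[b]
    prob += n_ref[b] - n_sum(b) >= -C[b]

prob.solve()
summary = [s for s in sent_len if z[s].value() == 1]
gain = sum(min(n_ref[b], sum(n_bs.get((b, s), 0) for s in summary)) for b in n_ref)
print(summary, gain)
```

In the full system, n_{b,ref} is not available at test time and is replaced by L times the normalized frequency estimated by the regression model described next.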
Since the normalized frequency N_{b,ref} is a real number, we choose to use a logistic regression model to predict it:

  N_{b,ref} = exp{w′f(b)} / Σ_j exp{w′f(b_j)}    (13)

where f(b_j) is the feature vector of bigram b_j and w′ is the corresponding feature weight. Since even for identical bigrams b_i = b_j, their feature vectors may be different (f(b_i) ≠ f(b_j)) due to their different contexts, we sum up frequencies for identical bigrams {b_i | b_i = b}:

  N_{b,ref} = Σ_{i: b_i = b} N_{b_i,ref} = Σ_{i: b_i = b} exp{w′f(b_i)} / Σ_j exp{w′f(b_j)}    (14)

To train this regression model using the given reference abstractive summaries, rather than trying to minimize the squared error as typically done, we propose a new objective function. Since the normalized frequency satisfies the probability constraint Σ_b N_{b,ref} = 1, we propose to use KL divergence to measure the distance between the estimated frequencies and the ground-truth values. The objective function for training is thus to minimize the KL distance:

  min Σ_b Ñ_{b,ref} log( Ñ_{b,ref} / N_{b,ref} )    (15)

where Ñ_{b,ref} is the true normalized frequency of bigram b in the reference summaries. Finally, we replace N_{b,ref} in Formula (15) with Eq. (14) and obtain the objective function below:

  max Σ_b Ñ_{b,ref} log( Σ_{i: b_i = b} exp{w′f(b_i)} / Σ_j exp{w′f(b_j)} )    (16)

This shares the same form as the contrastive estimation proposed by (Smith and Eisner, 2005). We use the gradient descent method for parameter estimation; the initial w is set to zero.

2.3 Features

Each bigram is represented using a set of features in the above regression model. We use two types of features: word-level and sentence-level features. Some of these features have been used in previous work (Aker and Gaizauskas, 2009; Brandow et al., 1995; Edmundson, 1969; Radev, 2001):

• Word Level:
  – 1. Term frequency1: The frequency of this bigram in the given topic.
  – 2. Term frequency2: The frequency of this bigram in the selected sentences.1
  – 3. Stop word ratio: Ratio of stop words in this bigram. The value can be {0, 0.5, 1}.
  – 4. Similarity with topic title: The number of common tokens in these two strings, divided by the length of the longer string.
  – 5. Similarity with description of the topic: Similarity of the bigram with the topic description (see the next data section about the given topics in the summarization task).

• Sentence Level: (information of the sentence containing the bigram)
  – 6. Sentence ratio: Number of sentences that include this bigram, divided by the total number of the selected sentences.
  – 7. Sentence similarity: Sentence similarity with the topic's query, which is the concatenation of the topic title and description.
  – 8. Sentence position: Sentence position in the document.
  – 9. Sentence length: The number of words in the sentence.
  – 10. Paragraph starter: Binary feature indicating whether this sentence is the beginning of a paragraph.

1 See the next section about the sentence selection step.

3 Experiments

3.1 Data

We evaluate our method using several recent TAC data sets, from 2008 to 2011. The TAC summarization task is to generate summaries of at most 100 words from 10 documents for a given topic query (with a title and a more detailed description). For model training, we also included two years' DUC data (2006 and 2007). When evaluating on one TAC data set, we use the other years of the TAC data plus the two DUC data sets as the training data.

3.2 Summarization System

We use the same system pipeline described in (Gillick et al., 2008; McDonald, 2007). The key modules in the ICSI ILP system (Gillick et al., 2008) are briefly described below.
• Step 1: Clean documents, split text into sentences.
• Step 2: Extract bigrams from all the sentences, then select those bigrams with document frequency equal to or more than 3. We call this subset the initial bigram set in the following.
• Step 3: Select relevant sentences that contain at least one bigram from the initial bigram set.
• Step 4: Feed the ILP with the sentences and the bigram set to get the result.
• Step 5: Order the sentences identified by the ILP as the final summary.

The difference between the ICSI system and ours is in the 4th step. In our method, we first extract all the bigrams from the selected sentences and then estimate each bigram's N_{b,ref} using the regression model. Then we use the top-n bigrams with their N_{b,ref} and all the selected sentences in our proposed ILP module for summary sentence selection. When training our bigram regression model, we use each of the 4 reference summaries separately, i.e., the bigram frequency is obtained from one reference summary. The same pre-selection of sentences described above is also applied in training, that is, the bigram instances used in training are from these selected sentences and the reference summary.

4 Experiment and Analysis

4.1 Experimental Results

Table 1 shows the ROUGE-2 results of our proposed system, the ICSI system, and also the best performing system in the NIST TAC evaluation. We can see that our proposed system consistently outperforms the ICSI ILP system (the gain is statistically significant based on ROUGE's 95% confidence interval results). Compared to the best reported TAC result, our method has better performance on three data sets, the exception being the 2011 data. Note that the best performing system for the 2009 data is the ICSI ILP system, with an additional compression step. Our ILP method is purely extractive. Even without using compression, our approach performs better than the full ICSI system. The best performing system for the 2011 data also has a compression module. We expect that after applying sentence compression and merging we would have even better performance; however, our focus in this paper is on bigram-based extractive summarization.

         ICSI ILP   Proposed   TAC Rank1
         System     System
  2008   0.1023     0.1076     0.1038
  2009   0.1160     0.1246     0.1216
  2010   0.1003     0.1067     0.0957
  2011   0.1271     0.1327     0.1344

  Table 1: ROUGE-2 summarization results.

There are several differences between the ICSI system and our proposed method. First is the bigrams (concepts) used. We use the top 100 bigrams from our bigram estimation module, whereas the ICSI system just used the initial bigram set described in Section 3.2. Second, the weights for those bigrams differ. We used the estimated value from the regression model; the ICSI system just uses the bigram's document frequency in the original text as the weight. Finally, the two systems use different ILP setups. To analyze which factors (or all of them) explain the performance difference, we conducted various controlled experiments for these three factors (bigrams, weights, ILP). All of the following experiments use the TAC 2009 data as the test set.

4.2 Effect of Bigram Weights

In this experiment, we vary the weighting methods for the two systems: our proposed method and the ICSI system. We use three weighting setups: the estimated bigram frequency value in our method, document frequency, or term frequency from the original text. Tables 2 and 3 show the results using the top 100 bigrams from our system and the initial bigram set from the ICSI system, respectively.
We also evaluate using the two different ILP configurations in these experiments. First of all, we can see that for both ILP systems, our estimated bigram weights outperform the other frequency-based weights. For the ICSI ILP system, using bigram document frequency achieves better performance than term frequency (which explains why document frequency is used in their system). In contrast, for our ILP method, the bigram's term frequency is slightly more useful than its document frequency. This indicates that our estimated value is more related to the bigram's term frequency in the original text.

  #   Weight            ILP        ROUGE-2
  1   Estimated value   Proposed   0.1246
  2                     ICSI       0.1178
  3   Document freq     Proposed   0.1109
  4                     ICSI       0.1132
  5   Term freq         Proposed   0.1116
  6                     ICSI       0.1080

  Table 2: Results using different weighting methods on the top 100 bigrams generated from our proposed system.

  #   Weight            ILP        ROUGE-2
  1   Estimated value   Proposed   0.1157
  2                     ICSI       0.1161
  3   Document freq     Proposed   0.1101
  4                     ICSI       0.1160
  5   Term freq         Proposed   0.1109
  6                     ICSI       0.1072

  Table 3: Results using different weighting methods based on the initial bigram sets. The average number of bigrams is around 80 for each topic.

When the weight is document frequency, the ICSI result is better than our proposed ILP; whereas when using term frequency as the weights, our ILP has better results, again suggesting that term frequency fits our ILP system better. When the weight is the estimated value, the results depend on the bigram set used. The ICSI ILP performs slightly better than ours when it is equipped with the initial bigram set, but our proposed ILP has much better results using our selected top-100 bigrams. This shows that the size and quality of the bigrams have an impact on the ILP modules.

4.3 The Effect of Bigram Set Size

In our proposed system, we use the 100 top bigrams. There are about 80 bigrams used in the ICSI ILP system. A natural question to ask is the impact of the number of bigrams and their quality on the summarization system. Table 4 shows some statistics of the bigrams. We can see that about one third of the bigrams in the reference summary are in the original text (127.3 out of 321.93), verifying that people do use different words/bigrams when writing abstractive summaries. We mentioned that we only use the top-N (N is 100 in the previous experiments) bigrams in our summarization system. On one hand, this is to save computational cost for the ILP module. On the other hand, we see from the table that only 127 of these more than 2K bigrams are in the reference summary and are thus expected to help the summary responsiveness. Including all the bigrams would introduce a large amount of noise.

  # bigrams in ref summary                                               321.93
  # bigrams in text and ref summary                                      127.3
  # bigrams used in our regression model (i.e., in selected sentences)   2140.7

  Table 4: Bigram statistics. The numbers are averages per topic.

Figure 1 shows the bigram coverage (the number of bigrams used in the system that are also in the reference summaries) as we vary the number N of selected bigrams. As expected, we can see that as N increases, more reference summary bigrams are included in the system. There are 25 summary bigrams in the top-50 bigrams and about 38 in the top-100 bigrams. Compared with the ICSI system, which has around 80 bigrams in the initial bigram set and 29 in the reference summary, our estimation module has better coverage.
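The coverage quantity plotted in Figure 1 is straightforward to compute; the following small sketch (illustrative names and toy data, not the paper's code) shows the counting step, assuming the bigrams are ranked by the regression model's estimated N_{b,ref}.

```python
# Count how many of the top-N estimated bigrams also occur in the reference summary.
from typing import List, Set

def coverage(ranked_bigrams: List[str], reference_bigrams: Set[str], top_n: int) -> int:
    """Number of reference-summary bigrams among the top-N ranked bigrams."""
    return sum(1 for b in ranked_bigrams[:top_n] if b in reference_bigrams)

ranked = ["the cat", "cat sat", "on the", "the mat", "a dog"]   # sorted by estimated N_{b,ref}
reference = {"the cat", "the mat"}
print([coverage(ranked, reference, n) for n in (2, 4, 5)])       # e.g. [1, 2, 2]
```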
[Figure 1: Coverage of bigrams (number of bigrams in the reference summary) when varying the number of bigrams used in the ILP systems. Axes: number of selected bigrams (x) vs. number of bigrams both selected and in the reference (y).]

Increasing the number of bigrams used in the system will lead to better coverage; however, the number of incorrect bigrams also increases and has a negative impact on the system performance. To examine the best tradeoff, we conduct experiments choosing different top-N bigram sets for the two ILP systems, as shown in Figure 2. For both ILP systems, we used the estimated weight value for the bigrams. We can see that the ICSI ILP system performs better when the input bigrams have less noise (those bigrams that are not in the summary). However, our proposed method is slightly more robust to this kind of noise, possibly because of the weights we use in our system – the noisy bigrams have lower weights and thus less impact on the final system performance. Overall the two systems have similar trends: performance increases at the beginning when using more bigrams, and after a certain point starts degrading with too many bigrams. The optimal number of bigrams differs for the two systems, with a larger number of bigrams in our method. We also notice that the ICSI ILP system achieved a ROUGE-2 of 0.1218 when using the top 60 bigrams, which is better than using the initial bigram set in their method (0.1160).

[Figure 2: Summarization performance (ROUGE-2) when varying the number of selected bigrams for the two systems (Proposed ILP vs. ICSI).]

4.4 Oracle Experiments

Based on the above analysis, we can see the impact of the bigram set and their weights. The following experiments are designed to demonstrate the best system performance we can achieve if we have access to good quality bigrams and weights. Here we use the information from the reference summary. The first is an oracle experiment, where we use all the bigrams from the reference summaries that are also in the original text. In the ICSI ILP system, the weights are the document frequency from the multiple reference summaries. In our ILP module, we use the term frequency of the bigram. The oracle results are shown in Table 5. We can see these are significantly better than the automatic systems.

  ILP System   ROUGE-2
  Our ILP      0.2124
  ICSI ILP     0.2128

  Table 5: Oracle experiment: using bigrams and their frequencies in the reference summary as weights.

From Table 5, we notice that the ICSI ILP performs marginally better than our proposed ILP. We hypothesize that one reason may be that many bigrams in the reference summary only appear once. Table 6 shows the frequency of the bigrams in the summary. Indeed, 85% of the bigrams appear only once and no bigram appears more than 9 times. For the majority of the bigrams, our method and the ICSI ILP are the same. For the others, our system has a slight disadvantage when using the reference term frequency. We expect the high term frequencies may need to be properly smoothed/normalized.

  Freq    1     2    3    4    5    6    7    8    9
  Ave #   277   32   7.5  3.2  1.1  0.3  0.1  0.1  0.04

  Table 6: Average number of bigrams for each term frequency in one topic's reference summary.

We also treat the oracle results as the gold standard for extractive summarization and compared how the two automatic summarization systems differ at the sentence level.
This is different from the results in Table 1, which are the ROUGE results comparing to the human-written abstractive summaries at the n-gram level. We found that among the 188 sentences in this gold standard, our system hits 31 and ICSI only has 23. This again shows that our system has better performance, not just at the word level based on ROUGE measures, but also at the sentence level. There are on average 3 different sentences per topic between these two results.

In the second experiment, after we obtain the estimated N_{b,ref} for every bigram in the selected sentences from our regression model, we only keep those bigrams that are in the reference summary, and use the estimated weights for both ILP modules. Table 7 shows the results. We can consider these as the upper bound the system can achieve if we use the automatically estimated weights for the correct bigrams. In this experiment the ICSI ILP still performs better than ours. This might be attributed to the fact that there is less noise (all the bigrams are the correct ones) and thus the ICSI ILP system performs well. We can see that these results are worse than the previous oracle experiments, but are better than using the automatically generated bigrams, again showing that the bigram and weight estimation is critical for summarization.

  #   Weight            ILP        ROUGE-2
  1   Estimated value   Proposed   0.1888
  2                     ICSI       0.1942

  Table 7: Summarization results when using the estimated weights and only keeping the bigrams that are in the reference summary.

4.5 Effect of Training Set

Since our method uses supervised learning, we conduct an experiment to show the impact of the training size. In TAC's data, each topic has two sets of documents. For set A, the task is standard summarization, and there are 4 reference summaries, each 100 words long; for set B, it is an update summarization task – the summary includes information not mentioned in the summary from set A. There are also 4 reference summaries, with 400 words in total. Table 8 shows the results on the 2009 data when using the data from different years and different sets for training. We notice that when the training data only contains set A, the performance is always better than using set B or the combined set A and B. This is not surprising because of the different task definitions. Therefore, for the rest of the study on data size impact, we only use data set A from the TAC data and the DUC data as the training set. In total there are about 233 topics from the two years' DUC data (06, 07) and three years' TAC data (08, 10, 11). We incrementally add 20 topics at a time (from DUC06 to TAC11) and plot the learning curve, as shown in Figure 3. As expected, more training data results in better performance.

  Training Set      # Topics   ROUGE-2
  08 Corpus (A)     48         0.1192
  08 Corpus (B)     48         0.1178
  08 Corpus (A+B)   96         0.1188
  10 Corpus (A)     46         0.1174
  10 Corpus (B)     46         0.1167
  10 Corpus (A+B)   92         0.1170
  11 Corpus (A)     44         0.1157
  11 Corpus (B)     44         0.1130
  11 Corpus (A+B)   88         0.1140

  Table 8: Summarization performance when using different training corpora.

[Figure 3: Learning curve: ROUGE-2 as a function of the number of training topics.]

4.6 Summary of Analysis

The previous experiments have shown the impact of the three factors: the quality of the bigrams themselves, the weights used for these bigrams, and the ILP module. We found that the bigrams and their weights are critical for both ILP setups.
However, there is negligible difference between the two ILP methods. An important part of our system is the supervised method for bigram and weight estimation. We have already seen for the previous ILP method, when using our bigrams together with the weights, better performance can be achieved. Therefore we ask the question whether this is simply because we use supervised learning, or whether our proposed regression model is the key. To answer this, we trained a simple supervised binary classifier for bigram prediction (positive means that a bigram appears in the summary) using the same set of features as used in our bigram weight estimation module, and then used their document frequency in the ICSI ILP system. The result for this method is 0.1128 on the TAC 2009 data. This is much lower than our result. We originally expected that using the supervised method may outperform the unsupervised bigram selection which only uses term frequency information. Further experiments are needed to investigate this. From this we can see that it is not just the supervised methods or using annotated data that yields the overall improved system performance, but rather our proposed regression setup for bigrams is the main reason. 5 Related Work We briefly describe some prior work on summarization in this section. Unsupervised methods have been widely used. In particular, recently several optimization approaches have demonstrated 1010 competitive performance for extractive summarization task. Maximum marginal relevance (MMR) (Carbonell and Goldstein, 1998) uses a greedy algorithm to find summary sentences. (McDonald, 2007) improved the MMR algorithm to dynamic programming. They used a modified objective function in order to consider whether the selected sentence is globally optimal. Sentencelevel ILP was also first introduced in (McDonald, 2007), but (Gillick and Favre, 2009) revised it to concept-based ILP. (Woodsend and Lapata, 2012) utilized ILP to jointly optimize different aspects including content selection, surface realization, and rewrite rules in summarization. (Galanis et al., 2012) uses ILP to jointly maximize the importance of the sentences and their diversity in the summary. (Berg-Kirkpatrick et al., 2011) applied a similar idea to conduct the sentence compression and extraction for multiple document summarization. (Jin et al., 2010) made a comparative study on sentence/concept selection and pairwise and list ranking algorithms, and concluded ILP performed better than MMR and the diversity penalty strategy in sentence/concept selection. Other global optimization methods include submodularity (Lin and Bilmes, 2010) and graph-based approaches (Erkan and Radev, 2004; Leskovec et al., 2005; Mihalcea and Tarau, 2004). Various unsupervised probabilistic topic models have also been investigated for summarization and shown promising. For example, (Celikyilmaz and Hakkani-T¨ur, 2011) used it to model the hidden abstract concepts across documents as well as the correlation between these concepts to generate topically coherent and non-redundant summaries. (Darling and Song, 2011) applied it to separate the semantically important words from the lowcontent function words. In contrast to these unsupervised approaches, there are also various efforts on supervised learning for summarization where a model is trained to predict whether a sentence is in the summary or not. 
Different features and classifiers have been explored for this task, such as Bayesian method (Kupiec et al., 1995), maximum entropy (Osborne, 2002), CRF (Galley, 2006), and recently reinforcement learning (Ryang and Abekawa, 2012). (Aker et al., 2010) used discriminative reranking on multiple candidates generated by A* search. Recently, research has also been performed to address some issues in the supervised setup, such as the class data imbalance problem (Xie and Liu, 2010). In this paper, we propose to incorporate the supervised method into the concept-based ILP framework. Unlike previous work using sentencebased supervised learning, we use a regression model to estimate the bigrams and their weights, and use these to guide sentence selection. Compared to the direct sentence-based classification or regression methods mentioned above, our method has an advantage. When abstractive summaries are given, one needs to use that information to automatically generate reference labels (a sentence is in the summary or not) for extractive summarization. Most researchers have used the similarity between a sentence in the document and the abstractive summary for labeling. This is not a perfect process. In our method, we do not need to generate this extra label for model training since ours is based on bigrams – it is straightforward to obtain the reference frequency for bigrams by simply looking at the reference summary. We expect our approach also paves an easy way for future automatic abstractive summarization. One previous study that is most related to ours is (Conroy et al., 2011), which utilized a Naive Bayes classifier to predict the probability of a bigram, and applied ILP for the final sentence selection. They used more features than ours, whereas we use a discriminatively trained regression model and a modified ILP framework. Our proposed method performs better than their reported results in TAC 2011 data. Another study closely related to ours is (Davis et al., 2012), which leveraged Latent Semantic Analysis (LSA) to produce term weights and selected summary sentences by computing an approximate solution to the Budgeted Maximal Coverage problem. 6 Conclusion and Future Work In this paper, we leverage the ILP method as a core component in our summarization system. Different from the previous ILP summarization approach, we propose a supervised learning method (a discriminatively trained regression model) to determine the importance of the bigrams fed to the ILP module. In addition, we revise the ILP to maximize the bigram gain (which is expected to be highly correlated with ROUGE-2 scores) rather than the concept/bigram coverage. Our proposed method yielded better results than the previous state-of-the-art ILP system on different TAC data 1011 sets. From a series of experiments, we found that there is little difference between the two ILP modules, and that the improved system performance is attributed to the fact that our proposed supervised bigram estimation module can successfully gather the important bigram and assign them appropriate weights. There are several directions that warrant further research. We plan to consider the context of bigrams to better predict whether a bigram is in the reference summary. We will also investigate the relationship between concepts and sentences, which may help move towards abstractive summarization. Acknowledgments This work is partly supported by DARPA under Contract No. HR0011-12-C-0016 and FA875013-2-0041, and NSF IIS-0845484. 
Any opinions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA or NSF. References Ahmet Aker and Robert Gaizauskas. 2009. Summary generation for toponym-referencedimages using object type language models. In Proceedings of the International Conference RANLP. Ahmet Aker, Trevor Cohn, and Robert Gaizauskas. 2010. Multi-document summarization using a* search and discriminative training. In Proceedings of the EMNLP. Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the ACL. Ronald Brandow, Karl Mitze, and Lisa F. Rau. 1995. Automatic condensation of electronic publications by sentence selection. Inf. Process. Manage. Jaime Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the SIGIR. Asli Celikyilmaz and Dilek Hakkani-T¨ur. 2011. Discovery of topically coherent sentences for extractive summarization. In Proceedings of the ACL. John M. Conroy, Judith D. Schlesinger, Jeff Kubina, Peter A. Rankel, and Dianne P. O’Leary. 2011. Classy 2011 at tac: Guided and multi-lingual summaries and evaluation metrics. In Proceedings of the TAC. William M. Darling and Fei Song. 2011. Probabilistic document modeling for syntax removal in text summarization. In Proceedings of the ACL. Sashka T. Davis, John M. Conroy, and Judith D. Schlesinger. 2012. Occams - an optimal combinatorial covering algorithm for multi-document summarization. In Proceedings of the ICDM. H. P. Edmundson. 1969. New methods in automatic extracting. J. ACM. G¨unes Erkan and Dragomir R. Radev. 2004. Lexrank: graph-based lexical centrality as salience in text summarization. J. Artif. Int. Res. Dimitrios Galanis, Gerasimos Lampouras, and Ion Androutsopoulos. 2012. Extractive multi-document summarization with integer linear programming and support vector regression. In Proceedings of the COLING. Michel Galley. 2006. A skip-chain conditional random field for ranking meeting utterances by importance. In Proceedings of the EMNLP. Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing on NAACL. Dan Gillick, Benoit Favre, and Dilek Hakkani-T¨ur. 2008. In The ICSI Summarization System at TAC 2008. Feng Jin, Minlie Huang, and Xiaoyan Zhu. 2010. A comparative study on ranking and selection strategies for multi-document summarization. In Proceedings of the COLING. Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the SIGIR. Jure Leskovec, Natasa Milic-Frayling, and Marko Grobelnik. 2005. Impact of linguistic analysis on the semantic graph coverage and learning of document extracts. In Proceedings of the AAAI. Hui Lin and Jeff Bilmes. 2010. Multi-document summarization via budgeted maximization of submodular functions. In Proceedings of the NAACL. Chin-Yew Lin. 2004. Rouge: a package for automatic evaluation of summaries. In Proceedings of the ACL. Ryan McDonald. 2007. A study of global inference algorithms in multi-document summarization. In Proceedings of the European conference on IR research. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the EMNLP. Miles Osborne. 2002. Using maximum entropy for sentence extraction. In Proceedings of the ACL-02 Workshop on Automatic Summarization. 1012 Dragomir R. Radev. 2001. 
Experiments in single and multidocument summarization using mead. In In First Document Understanding Conference. Seonggi Ryang and Takeshi Abekawa. 2012. Framework of automatic text summarization using reinforcement learning. In Proceedings of the EMNLP. Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: training log-linear models on unlabeled data. In Proceedings of the ACL. Kristian Woodsend and Mirella Lapata. 2012. Multiple aspect summarization using integer linear programming. In Proceedings of the EMNLP. Shasha Xie and Yang Liu. 2010. Improving supervised learning for meeting summarization using sampling and regression. Comput. Speech Lang. 1013
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1–12, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Learning Ensembles of Structured Prediction Rules Corinna Cortes Google Research 111 8th Avenue, New York, NY 10011 [email protected] Vitaly Kuznetsov Courant Institute 251 Mercer Street, New York, NY 10012 [email protected] Mehryar Mohri Courant Institute and Google Research 251 Mercer Street, New York, NY 10012 [email protected] Abstract We present a series of algorithms with theoretical guarantees for learning accurate ensembles of several structured prediction rules for which no prior knowledge is assumed. This includes a number of randomized and deterministic algorithms devised by converting on-line learning algorithms to batch ones, and a boostingstyle algorithm applicable in the context of structured prediction with a large number of labels. We also report the results of extensive experiments with these algorithms. 1 Introduction We study the problem of learning accurate ensembles of structured prediction experts. Ensemble methods are widely used in machine learning and have been shown to be often very effective (Breiman, 1996; Freund and Schapire, 1997; Smyth and Wolpert, 1999; MacKay, 1991; Freund et al., 2004). However, ensemble methods and their theory have been developed primarily for binary classification or regression tasks. Their techniques do not readily apply to structured prediction problems. While it is straightforward to combine scalar outputs for a classification or regression problem, it is less clear how to combine structured predictions such as phonemic pronunciation hypotheses, speech recognition lattices, parse trees, or alternative machine translations. Consider for example the problem of devising an ensemble method for pronunciation, a critical component of modern speech recognition (Ghoshal et al., 2009). Often, several pronunciation models or experts are available for transcribing words into sequences of phonemes. These models may have been derived using other machine learning algorithms or they may be based on carefully hand-crafted rules. In general, none of these pronunciation experts is fully accurate and each expert may be making mistakes at different positions along the output sequence. One can hope that a model that patches together the pronunciation of different experts could achieve a superior performance. Similar ensemble structured prediction problems arise in other tasks, including machine translation, part-of-speech tagging, optical character recognition and computer vision, with structures or substructures varying with each task. We seek to tackle all of these problems simultaneously and consider the general setting where the label or output associated to an input x ∈X is a structure y ∈Y that can be decomposed and represented by l substructures y1, . . . , yl. For the pronunciation example just discussed, x is a specific word or word sequence and y its phonemic transcription. A natural choice for the substructures yk is then the individual phonemes forming y. Other possible choices include n-grams of consecutive phonemes or more general subsequences. We will assume that the loss function considered admits an additive decomposition over the substructures, as is common in structured prediction. We also assume access to a set of structured prediction experts h1, . . . , hp that we treat as black boxes. 
Given an input x ∈X, each expert predicts a structure hj(x) = (h1 j(x), . . . , hl j(x)). The hypotheses hj may be the output of a structured prediction algorithm such as Conditional Random Fields (Lafferty et al., 2001), Averaged Perceptron (Collins, 2002), StructSVM (Tsochantaridis et al., 2005), Max Margin Markov Networks (Taskar et al., 2004) or the Regression Technique for Learning Transductions (Cortes et al., 2005), or some other algorithmic or human expert. Given a labeled training sample (x1, y1), . . . , (xm, ym), our objective is to use the predictions of these experts 1 to form an accurate ensemble. Variants of the ensemble problem just formulated have been studied in the past in the natural language processing and machine learning literature. One of the most recent, and possibly most relevant studies for sequence data is that of (Nguyen and Guo, 2007), which is based on the forward stepwise selection introduced by (Caruana et al., 2004). However, one disadvantage of this greedy approach is that it can be proven to fail to select an optimal ensemble of experts even in favorable cases where a specialized expert is available for each local prediction (Cortes et al., 2014a). Ensemble methods for structured prediction based on bagging, random forests and random subspaces have also been proposed in (Kocev et al., 2013). One of the limitations of this work is that it is applicable only to a very specific class of treebased experts introduced in that paper. Similarly, a boosting approach was developed in (Wang et al., 2007) but it applies only to local experts. In the context of natural language processing, a variety of different re-ranking techniques have been proposed for somewhat related problems (Collins and Koo, 2005; Zeman and ˇZabokrtsk´y, 2005; Sagae and Lavie, 2006; Zhang et al., 2009). But, reranking methods do not combine predictions at the level of substructures, thus the final prediction of the ensemble coincides with the prediction made by one of the experts, which can be shown to be suboptimal in many cases. Furthermore, these methods typically assume the use of probabilistic models, which is not a requirement in our learning scenario. Other ensembles of probabilistic models have also been considered in text and speech processing by forming a product of probabilistic models via the intersection of lattices (Mohri et al., 2008), or a straightforward combination of the posteriors from probabilistic grammars trained using EM with different starting points (Petrov, 2010), or some other rather intricate techniques in speech recognition (Fiscus, 1997). Finally, an algorithm of (MacKay, 1997) is another example of an ensemble method for structured prediction though it is not addressing directly the problem we are considering. Most of the references just mentioned do not give a rigorous theoretical justification for the techniques proposed. We are not aware of any prior theoretical analysis for the ensemble structured prediction problem we consider. Here, we present two families of algorithms for learning ensembles of structured prediction rules that both perform well in practice and enjoy strong theoretical guarantees. In Section 3, we develop ensemble methods based on on-line algorithms. To do so, we extend existing on-line-to-batch conversions to our more general setting. In Section 4, we present a new boosting-style algorithm which is applicable even with a large set of classes as in the problem we consider, and for which we present margin-based learning guarantees. 
Section 5 reports the results of our extensive experiments.1 2 Learning scenario As in standard supervised learning problems, we assume that the learner receives a training sample S = ((x1, y1), . . . , (xm, ym)) ∈X × Y of m labeled points drawn i.i.d. according to the some distribution D used both for training and testing. We also assume that the learner has access to a set of p predictors h1, . . . , hp mapping X to Y to devise an accurate ensemble prediction. Thus, for any input x ∈X, he can use the prediction of the p experts h1(x), . . . , hp(x). No other information is available to the learner about these p experts, in particular the way they have been trained or derived is not known to the learner. But, we will assume that the training sample S is distinct from what may have been used for training the algorithms that generated h1(x), . . . , hp(x). To simplify our analysis, we assume that the number of substructures l ≥1 is fixed. This does not cause any loss of generality so long as the maximum number of substructures is bounded, which is the case in all the applications we consider. The quality of the predictions is measured by a loss function L: Y × Y →R+ that can be decomposed as a sum of loss functions ℓk : Yk → R+ over the substructure sets Yk, that is, for all y = (y1, . . . , yl) ∈Y with yk ∈Yk and y′ = (y′1, . . . , y′l) ∈Y with y′k ∈Yk, L(y, y′) = l X k=1 ℓk(yk, y′k). (1) We will assume in all that follows that the loss function L is bounded: L(y, y′) ≤M for all 1This paper is a modified version of (Cortes et al., 2014a) to which we refer the reader for the proofs of the theorems stated and a more detailed discussion of our algorithms. 2 (y, y′) for some M > 0. A prototypical example of such loss functions is the normalized Hamming loss LHam, which is the fraction of substructures for which two labels y and y′ disagree, thus in that case ℓk(yk, y′k) = 1 l Iyk̸=y′k and M = 1. 3 On-line learning approach In this section, we present an on-line learning solution to the ensemble structured prediction problem just discussed. We first give a new formulation of the problem as that of on-line learning with expert advice, where the experts correspond to the paths of an acyclic automaton. The on-line algorithm generates at each iteration a distribution over the path-experts. A critical component of our approach consists of using these distributions to define a prediction algorithm with favorable generalization guarantees. This requires an extension of the existing on-line-to-batch conversion techniques to the more general case of combining distributions over path-experts, as opposed to combining single hypotheses. 3.1 Path experts Each expert hj induces a set of substructure hypotheses h1 j, . . . , hl j. As already discussed, one particular expert may be better at predicting the kth substructure while some other expert may be more accurate at predicting another substructure. Therefore, it is desirable to combine the substructure predictions of all experts to derive a more accurate prediction. This leads us to considering an acyclic finite automaton G such as that of Figure 1 which admits all possible sequences of substructure hypotheses, or, more generally, a finite automaton such as that of Figure 2 which only allows a subset of these sequences. An automaton such as G compactly represents a set of path experts: each path from the initial vertex 0 to the final vertex l is labeled with a sequence of substructure hypotheses h1 j1, . . . 
, hl jl and defines a hypothesis which associates to input x the output h1 j1(x) · · · hl jl(x). We will denote by H the set of all path experts. We also denote by h each path expert defined by h1 j1, . . . , hl jl, with jk ∈{1, . . . , p}, and denote by hk its kth substructure hypothesis hk jk. Our ensemble structure prediction problem can then be formulated as that of selecting the best path expert (or collection of                      Figure 1: Finite automaton G of path experts. path experts) in G. Note that, in general, the path expert selected does not coincide with any of the original experts h1, . . . , hp. 3.2 On-line algorithm Using an automaton G, the size of the pool of experts H we consider can be very large. For example, in the case of the automaton of Figure 1, the size of the pool of experts is pl, and thus is exponentially large with respect to p. But, since learning guarantees in on-line learning admit only a logarithmic dependence on that size, they remain informative in this context. Nevertheless, the computational complexity of most on-line algorithms also directly depends on that size, which could make them impractical in this context. But, there exist several on-line solutions precisely designed to address this issue by exploiting the structure of the experts as in the case of our path experts. These include the algorithm of (Takimoto and Warmuth, 2003) denoted by WMWP, which is an extension of the (randomized) weightedmajority (WM) algorithm of (Littlestone and Warmuth, 1994) to more general bounded loss functions combined with the Weight Pushing (WP) algorithm of (Mohri, 1997); and the Follow the Perturbed Leader (FPL) algorithm of (Kalai and Vempala, 2005). The WMWP algorithm admits a more favorable regret guarantee than the FPL algorithm in our context and our discussion will focus on the use of WMWP for the design of our batch algorithm. However, we have also fully analyzed and implemented a batch algorithm based on FPL (Cortes et al., 2014a). As in the standard WM algorithm (Littlestone and Warmuth, 1994), WMWP maintains at each round t ∈[1, T], a distribution pt over the set of all experts, which in this context are the path experts h ∈H. At each round t ∈[1, T], the algorithm receives an input sequence xt, incurs the loss Eh∼pt[L(h(xt), yt)] = P h pt(h)L(h(xt), yt) and multiplicatively updates the distribution weight per expert: ∀h ∈H, pt+1(h)= pt(h)βL(h(xt),yt) P h′∈H pt(h′)βL(h′(xt),yt) , (2) 3                        Figure 2: Alternative experts automaton. where β ∈(0, 1) is some fixed parameter. The number of paths is exponentially large in p and the cost of updating all paths is therefore prohibitive. However, since the loss function is additive in the substructures and the updates are multiplicative, it suffices to maintain instead a weight wt(e) per transition e, following the update wt+1(e)= wt(e)βℓe(xt,yt) P orig(e′)=orig(e) wt(e′)βℓe′(xt,yt) (3) where ℓe(xt, yt) denotes the loss incurred by the substructure predictor labeling e for the input xt and output yt, and orig(e′) denotes the origin state of a transition e′ (Takimoto and Warmuth, 2003). Thus, the cost of the update is then linear in the size of the automaton. To use the resulting weighted automaton for sampling, the weight pushing algorithm is used, whose complexity is also linear in the size of the automaton (Mohri, 1997). 3.3 On-line-to-batch conversion The WMWP algorithm does not produce a sequence of path experts, rather, a sequence of distributions p1, . . . , pT over path experts. 
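Before describing the conversion, the per-transition update in (3) is simple to sketch for the full automaton of Figure 1, where position k has exactly one transition per expert. The following NumPy sketch is illustrative only (it is not the authors' implementation), and the random losses stand in for the per-position losses ℓ_e(x_t, y_t), assumed to lie in [0, 1] as for the normalized Hamming loss.

```python
# Sketch of the WMWP per-transition multiplicative update (eq. (3)) for the full
# automaton of Figure 1: one transition per expert at each of the l positions.
import numpy as np

def wmwp_update(w: np.ndarray, losses: np.ndarray, beta: float) -> np.ndarray:
    """w, losses: arrays of shape (l, p) = (positions, experts); returns updated weights."""
    w_new = w * beta ** losses                           # multiplicative update per transition
    return w_new / w_new.sum(axis=1, keepdims=True)      # renormalize per origin state

def sample_path(w: np.ndarray, rng: np.random.Generator) -> list:
    """Sample one path expert by picking an expert index independently at every position."""
    return [int(rng.choice(w.shape[1], p=w[k])) for k in range(w.shape[0])]

l, p, beta = 10, 5, 0.8
w = np.full((l, p), 1.0 / p)                             # uniform initial transition weights
rng = np.random.default_rng(0)
for _ in range(100):                                     # simulated rounds with toy losses
    losses = rng.integers(0, 2, size=(l, p)) / l
    w = wmwp_update(w, losses, beta)
print(sample_path(w, rng))
```

For the full automaton, normalizing per origin state reduces to normalizing per position and path sampling factorizes over positions; for restricted automata such as that of Figure 2, the weight-pushing step described above is needed instead.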
Thus, the on-line-to-batch conversion techniques described in (Littlestone, 1989; Cesa-Bianchi et al., 2004; Dekel and Singer, 2005) do not readily apply. Instead, we propose a generalization of the techniques of (Dekel and Singer, 2005). The conversion consists of two steps: extract a good collection of distributions P ⊆{p1, . . . , pT }; next use P to define an accurate hypothesis for prediction. For a subset P ⊆{p1, . . . , pT }, we define Γ(P)= 1 |P| X pt∈P X h∈H pt(h)L(h(xt), yt)+M s log 1 δ |P| = 1 |P| X pt∈P X e wt(e)ℓe(xt), yt)+M s log 1 δ |P| , where δ > 0 is a fixed parameter. With this definition, we choose Pδ as a minimizer of Γ(P) over some collection P of subsets of {p1, . . . , pT }: Pδ ∈argminP∈P Γ(P). The choice of P is restricted by computational considerations. One natural option is to let P be the union of the suffix sets {pt, . . . , pT }, t = 1, . . . , T. We will assume in what follows that P includes the set {p1, . . . , pT }. Next, we define a randomized algorithm based on Pδ. Given an input x, the algorithm consists of randomly selecting a path h according to p(h) = 1 |Pδ| X pt∈Pδ pt(h), (4) and returning the prediction h(x). Note that computing and storing p directly is not efficient. To sample from p, we first choose pt ∈Pδ uniformly at random and then sample a path h according to that pt. Sampling a path according to pt can be done efficiently using the weight pushing algorithm. Note that once an input x is received, the distribution p over the path experts h induces a probability distribution px over the output space Y. It is not hard to see that sampling a prediction y according to px is statistically equivalent to first sampling h according to p and then predicting h(x). We will denote by HRand the randomized hypothesis thereby generated. An inherent drawback of randomized solutions such as the one just described is that for the same input x the user can receive different predictions over time. Randomized solutions are also typically more costly to store. A collection of distributions P can also be used to define a deterministic prediction rule based on the scoring function approach. The majority vote scoring function is defined by ehMVote(x, y) = lY k=1  1 |Pδ| X pt∈Pδ p X j=1 wt,kj1hk j (x)=yk  . (5) The majority vote algorithm denoted by HMVote is then defined for all x ∈X, by HMVote(x) = argmaxy∈Y ehMVote(x, y). For an expert automaton accepting all path experts such as that of Figure 1, the maximizer of ehMVote can be found very efficiently by choosing y such that yk has the maximum weight in position k. In the next section, we present learning guarantees for HRand and HMVote. For a more extensive dis4 cussion of alternative prediction rules, see (Cortes et al., 2014a). 3.4 Batch learning guarantees We first present learning bounds for the randomized prediction rule HRand. Next, we upper bound the generalization error of HMVote in terms of that of HRand. Theorem 1. For any δ > 0, with probability at least 1 −δ over the choice of the sample ((x1, y1), . . . , (xT , yT )) drawn i.i.d. according to D, the following inequalities hold: E[L(HRand(x), y)]≤inf h∈HE[L(h(x), y)] + 2M r l log p T + 2M s log 2 δ T . For the normalized Hamming loss LHam, the bound of Theorem 1 holds with M = 1. We now upper bound the generalization error of the majority-vote algorithm HMVote in terms of that of the randomized algorithm HRand, which, combined with Theorem 1, immediately yields generalization bounds for the majority-vote algorithm HMVote. Proposition 2. 
The following inequality relates the generalization error of the majority-vote algorithm to that of the randomized one: E[LHam(HMVote(x),y)]≤2 E[LHam(HRand(x),y)], where the expectations are taken over (x, y) ∼D and h∼p. Proposition 2 suggests that the price to pay for derandomization is a factor of 2. More refined and more favorable guarantees can be proven for the majority-vote algorithm (Cortes et al., 2014a). 4 Boosting-style algorithm In this section, we devise a boosting-style algorithm for our ensemble structured prediction problem. The variants of AdaBoost for multiclass classification such as AdaBoost.MH or AdaBoost.MR (Freund and Schapire, 1997; Schapire and Singer, 1999; Schapire and Singer, 2000) cannot be readily applied in this context. First, the number of classes to consider here is quite large, as in all structured prediction problems, since it is exponential in the number of substructures l. For example, in the case of the pronunciation problem where the number of phonemes for English is in the order of 50, the number of classes is 50l. But, the objective function for AdaBoost.MH or AdaBoost.MR as well as the main steps of the algorithms include a sum over all possible labels, whose computational cost in this context would be prohibitive. Second, the loss function we consider is the normalized Hamming loss over the substructures predictions, which does not match the multiclass losses for the variants of AdaBoost.2 Finally, the natural base hypotheses for this problem admit a structure that can be exploited to devise a more efficient solution, which of course was not part of the original considerations for the design of these variants of AdaBoost. 4.1 Hypothesis sets The predictor HBoost returned by our boosting algorithm is based on a scoring function eh: X × Y →R, which, as for standard ensemble algorithms such as AdaBoost, is a convex combination of base scoring functions eht: eh = PT t=1 αteht, with αt ≥0. The base scoring functions used in our algorithm have the form ∀(x, y) ∈X × Y, eht(x, y) = l X k=1 ehk t (x, y). In particular, these can be derived from the path experts in H by letting hk t (x, y) = 1hk t (x)=yk. Thus, the score assigned to y by the base scoring function eht is the number of positions at which y matches the prediction of path expert ht given input x. HBoost is defined as follows in terms of eh or hts: ∀x ∈X, HBoost(x) = argmax y∈Y eh(x, y) We remark that the analysis and algorithm presented in this section are also applicable with a scoring function that is the product of the scores 2(Schapire and Singer, 1999) also present an algorithm using the Hamming loss for multi-class classification, but that is a Hamming loss over the set of classes and differs from the loss function relevant to our problem. Additionally, the main steps of that algorithm are also based on a sum over all classes. 5 at each substructure k as opposed to a sum, that is, eh(x, y) = lY k=1 T X t=1 αtehk t (x, y) ! . This can be used for example in the case where the experts are derived from probabilistic models. 4.2 ESPBoost algorithm To simplify our exposition, the algorithm that we now present uses base learners of the form hk t (x, y) = 1hk t (x)=yk. The general case can be handled in the same fashion with the only difference being the definition of the direction and step of the optimization procedure described below. For any i ∈[1, m] and k ∈[1, l], we define the margin of ehk for point (xi, yi) by ρ(ehk, xi, yi) = ehk(xi, yk i )−maxyk̸=yk i ehk(xi, yk). 
We first derive an upper bound on the empirical normalized Hamming loss of a hypothesis HBoost, with eh = PT t=1 αteht. Lemma 3. The following upper bound holds for the empirical normalized Hamming loss of the hypothesis HBoost: E (x,y)∼S[LHam(HBoost(x), y)] ≤1 ml m X i=1 l X k=1 exp  − T X t=1 αtρ(ehk t , xi, yi)  . The proof of this lemma as well as that of several other theorems related to this algorithm can be found in (Cortes et al., 2014a). In view of this upper bound, we consider the objective function F : RN →R defined for all α = (α1, . . . , αN) ∈RN by F(α) = 1 ml m X i=1 l X k=1 exp  − N X j=1 αjρ(ehk j , xi, yi)  , where h1, . . . , hN denote the set of all path experts in H. F is a convex and differentiable function of α. Our algorithm, ESPBoost (Ensemble Structured Prediction Boosting), is defined by the application of coordinate descent to the objective F. Algorithm 1 shows the pseudocode of the ESPBoost. Algorithm 1 ESPBoost Algorithm Inputs: S = ((x1, y1), . . . , (xm, ym)); set of experts {h1, . . . , hp} for i = 1 to m and k = 1 to l do D1(i, k) ← 1 ml end for for t = 1 to T do ht ←argminh∈H E(i,k)∼Dt[1hk(xi)̸=yk i ] ϵt ←E(i,k)∼Dt[1hk t (xi)̸=yk i ] αt ←1 2 log 1−ϵt ϵt Zt ←2 p ϵt(1 −ϵt) for i = 1 to m and k = 1 to l do Dt+1(i, k) ←exp(−αtρ(ehk t ,xi,yi))Dt(i,k) Zt end for end for Return eh = PT t=1 αteht Let αt−1 ∈RN denote the vector obtained after t −1 iterations and et the tth unit vector in RN. We denote by Dt the distribution over [1, m]×[1, l] defined by Dt(i, k) = 1 ml exp  −Pt−1 u=1 αuρ(ehk u, xi, yi)  At−1 where At−1 is a normalization factor, At−1 = 1 ml Pm i=1 Pl k=1 exp −Pt−1 u=1 αuρ(ehk u, xi, yi)  . The direction et selected at the tth round is the one minimizing the directional derivative, that is dF(αt−1 + ηet) dη η=0 =− m X i=1 l X k=1 ρ(ehk t , xi, yi)Dt(i, k)At−1 =  2 X i,k:hk t (xi)̸=yk i Dt(i, k) −1  At−1 =(2ϵt −1)At−1, where ϵt is the average error of ht given by ϵt = m X i=1 l X k=1 Dt(i, k)1hk t (xi)̸=yk i = E (i,k)∼Dt [1hk t (xi)̸=yk i ]. The remaining steps of our algorithm can be determined as in the case of AdaBoost. In particular, given the direction et, the best step αt is obtained by solving the equation dF(αt−1+αtet) dαt = 6 0, which admits the closed-form solution αt = 1 2 log 1−ϵt ϵt . The distribution Dt+1 can be expressed in terms of Dt with the normalization factor Zt = 2 p ϵt(1 −ϵt). Our weak learning assumption in this context is that there exists γ > 0 such that at each round, ϵt verifies ϵt < 1 2 −γ. Note that, at each round, the path expert ht with the smallest error ϵt can be determined easily and efficiently by first finding for each substructure k, the hk t that is the best with respect to the distribution weights Dt(i, k). Observe that, while the steps of our algorithm are syntactically close to those of AdaBoost and its multi-class variants, our algorithm is distinct and does not require sums over the exponential number of all possible labelings of the substructures and is quite efficient. 4.3 Learning guarantees We have derived both a margin-based generalization bound in support of the ESPBoost algorithm and a bound on the empirical margin loss. For any ρ > 0, define the empirical margin loss of HBoost by the following: bRρ  eh ∥α∥1  = 1 ml m X i=1 l X k=1 1ρ(ehk,xi,yi)≤ρ∥α∥1, where eh is the corresponding scoring function. The following theorem can be proven using the multi-class classification bounds of (Koltchinskii and Panchenko, 2002; Mohri et al., 2012) as can be shown in (Cortes et al., 2014a). Theorem 4. 
Let F denote the set of functions HBoost with eh = PT t=1 αteht for some α1, . . . , αt ≥0 and ht ∈H for all t ∈[1, T]. Fix ρ > 0. Then, for any δ > 0, with probability at least 1−δ, the following holds for all HBoost ∈F: E (x,y)∼D[LHam(HBoost(x), y)] ≤bRρ  eh ∥α∥1  + 2 ρl l X k=1 |Yk|2Rm(Hk) + s log l δ 2m , where Rm(Hk) denotes the Rademacher complexity of the class of functions Hk = {x 7→ehk t : j ∈[1, p], y ∈Yk}. Table 1: Average Normalized Hamming Loss, ADS1 and ADS2. βADS1 = 0.95, βADS2 = 0.95, TSLE = 100, δ = 0.05. ADS1, m = 200 ADS2, m = 200 HMVote 0.0197 ± 0.00002 0.2172 ± 0.00983 HFPL 0.0228 ± 0.00947 0.2517 ± 0.05322 HCV 0.0197 ± 0.00002 0.2385 ± 0.00002 HFPL-CV 0.0741 ± 0.04087 0.4001 ± 0.00028 HESPBoost 0.0197 ± 0.00002 0.2267 ± 0.00834 HSLE 0.5641 ± 0.00044 0.2500 ± 0.05003 HRand 0.1112 ± 0.00540 0.4000 ± 0.00018 Best hj 0.5635 ± 0.00004 0.4000 This theorem provides a margin-based guarantee for convex ensembles such as those returned by ESPBoost. The following theorem further provides an upper bound on the empirical margin loss for ESPBoost. Theorem 5. Let eh denote the scoring function returned by ESPBoost after T ≥1 rounds. Then, for any ρ > 0, the following inequality holds: bRρ  eh ∥α∥1  ≤2T T Y t=1 q ϵ1−ρ t (1 −ϵt)1+ρ. As in the case of AdaBoost (Schapire et al., 1997), it can be shown that for ρ < γ, ϵ1−ρ t (1 −ϵt)1+ρ ≤ (1 −2γ)1−ρ(1 + 2γ)1+ρ < 1 and the right-hand side of this bound decreases exponentially with T. 5 Experiments We used a number of artificial and real-world data sets for our experiments. For each data set, we performed 10-fold cross-validation with disjoint training sets.3 We report the average error for each task. In addition to the HMVote, HRand and HESPBoost hypotheses, we experimented with two algorithms discussed in more detail in (Cortes et al., 2014a): a cross-validation on-line-tobatch conversion of the WMWP algorithm, HCV, a majority-vote on-line-to-batch conversion with FPL, HFPL, and a cross-validation on-line-tobatch conversion with FPL, HFPL-CV. Finally, we compare with the HSLE algorithm of (Nguyen and Guo, 2007). 5.1 Artificial data sets Our artificial data set, ADS1 and ADS2 simulate the scenarios described in Section 1. In ADS1 the 3For the OCR data set, these subsets are predefined. 7 kth expert has a high accuracy on the kth position, in ADS2 an expert has low accuracy in a fixed set of positions. For the first artificial data set, ADS1, we used local experts h1, . . . , hp with p = 5. To generate the data we chose an arbitrary Markov chain over the English alphabet and sampled 40,000 random sequences each consisting of 10 symbols. Each of the five experts was designed to have a certain probability of making a mistake at each position in the sequence. Expert hj correctly predicted positions 2j −1 and 2j with probability 0.97 and other positions with probability 0.5. We forced experts to make similar mistakes by making them select an adjacent alphabet symbol in case of an error. For example, when a mistake was made on a symbol b, the expert prediction was forced to be either a or c. The second artificial data set, ADS2, modeled the case of rather poor experts. ADS2 was generated in the same way as ADS1, but the expert predictions were different. This time each expert made mistakes at four out of the ten distinct random positions in each sequence. Table 1 reports the results of our experiments. 
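As a concrete illustration of how ADS1-style data could be simulated, here is a minimal sketch. The transition distributions of the Markov chain, the random seed, and the way adjacent-symbol errors are drawn are placeholder choices; the paper specifies only an arbitrary chain over the English alphabet, 40,000 sequences of length 10, per-position accuracies of 0.97 and 0.5, and errors forced to adjacent symbols.

```python
import random
import string

random.seed(0)
ALPHABET = string.ascii_lowercase

# An arbitrary first-order Markov chain: one random transition distribution
# per symbol (the paper does not specify the chain, so this is a placeholder).
WEIGHTS = {a: [random.random() for _ in ALPHABET] for a in ALPHABET}

def sample_sequence(length=10):
    seq = [random.choice(ALPHABET)]
    while len(seq) < length:
        seq.append(random.choices(ALPHABET, weights=WEIGHTS[seq[-1]])[0])
    return seq

def adjacent_error(symbol):
    """Force similar mistakes: replace a symbol by an adjacent alphabet
    symbol, e.g. 'b' becomes either 'a' or 'c'."""
    i = ALPHABET.index(symbol)
    neighbours = [ALPHABET[j] for j in (i - 1, i + 1) if 0 <= j < len(ALPHABET)]
    return random.choice(neighbours)

def expert_prediction(y, j, p_good=0.97, p_other=0.5):
    """Expert j is accurate with probability p_good at (1-indexed) positions
    2j-1 and 2j, and with probability p_other elsewhere, as in ADS1."""
    good = {2 * j - 1, 2 * j}
    return [sym if random.random() < (p_good if k in good else p_other)
            else adjacent_error(sym)
            for k, sym in enumerate(y, start=1)]

data = [(y, [expert_prediction(y, j) for j in range(1, 6)])
        for y in (sample_sequence() for _ in range(40000))]
print(data[0])
```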
For all experiments with the algorithms HRand, HMVote, and HCV, we ran the WMWP algorithm for T = m rounds with the β values listed in the caption of Table 1, generating distributions P ⊆{p1, . . . , pT }. For P we used the collection of all suffix sets {pt, . . . , pT } and δ = 0.05. For the algorithms based on FPL, we used ϵ = 0.5/pl. The same parameter choices were used for the subsequent experiments. As can be seen from Table 1, in both cases, HMVote, our majority-vote algorithm based on our on-line-to-batch conversion using the WMWP algorithm (together with most of the other on-line based algorithms), yields a significant improvement over the best expert. It also outperforms HSLE, which in the case of ADS1 even fails to outperform the best hj. After 100 iterations on ADS1, the ensemble learned by HSLE consists of a single expert, which is why it leads to such a poor performance. It is also worth pointing out that HFPL-CV and HRand fail to outperform the best model on ADS2 set. This is in total agreement with our theoretical analysis since, in this case, any path expert has exactly the same performance and the error of the Table 2: Average Normalized Hamming Loss for ADS3. βADS1 = 0.95, βADS2 = 0.95, TSLE = 100, δ = 0.05. HMVote 0.1788 ± 0.00004 HFPL 0.2189 ± 0.04097 HCV 0.1788 ± 0.00004 HFPL-CV 0.3148 ± 0.00387 HESPBoost 0.1831 ± 0.00240 HSLE 0.1954 ± 0.00185 HRand 0.3196 ± 0.00018 Best hj 0.2957 ± 0.00005 Table 3: Average Normalized Hamming Loss, PDS1 and PDS2. βPDS1 = 0.85, βPDS2 = 0.97, TSLE = 100, δ = 0.05. PDS1, m = 130 PDS2, m = 400 HMVote 0.2225 ± 0.00301 0.2323 ± 0.00069 HFPL 0.2657 ± 0.07947 0.2337 ± 0.00229 HCV 0.2316 ± 0.00189 0.2364 ± 0.00080 HFPL-CV 0.4451 ± 0.02743 0.4090 ± 0.01388 HESPBoost 0.3625 ± 0.01054 0.3499 ± 0.00509 HSLE 0.3130 ± 0.05137 0.3308 ± 0.03182 HRand 0.4713 ± 0.00360 0.4607 ± 0.00131 Best hj 0.3449 ± 0.00368 0.3413 ± 0.00067 best path expert is an asymptotic upper bound on the errors of these algorithms. The superior performance of the majority-vote-based algorithms suggests that these algorithms may have an advantage over other prediction rules beyond what is suggested by our learning bounds. We also synthesized a third data set, ADS3. Here, we simulated the case where each expert specialized in predicting some subset of the labels. In particular, we generated 40,000 random sequences over the English alphabet in the same way as for ADS1 and ADS2. To generate expert predictions, we partitioned the alphabet into 5 disjoint subsets Aj. Expert j always correctly predicted the label in Aj and the probability of correctly predicting the label not in Aj was set to 0.7. To train the ensemble algorithms, we used a training set of size m = 200. The results are presented in Table 2. HMVote, HCV and HESPBoost achieve the best performance on this data set with a considerable improvement in accuracy over the best expert hj. We also observe as for the ADS2 experiment that HRand and HFPL-CV fail to outperform the best model and approach the accuracy of the best path expert only asymptotically. 8 Table 4: Average edit distance, PDS1 and PDS2. βPDS1 = 0.85, βPDS2 = 0.97, TSLE = 100, δ = 0.05. 
PDS1, m = 130 PDS2, m = 400 HMVote 0.8395 ± 0.01076 0.9626 ± 0.00341 HFPL 1.0158 ± 0.34379 0.9744 ± 0.01277 HCV 0.8668 ± 0.00553 0.9840 ± 0.00364 HFPL-CV 1.8044 ± 0.09315 1.8625 ± 0.06016 HESPBoost 1.3977 ± 0.06017 1.4092 ± 0.04352 HSLE 1.1762 ± 0.12530 1.2477 ± 0.12267 HRand 1.8962 ± 0.01064 2.0838 ± 0.00518 Best hj 1.2163 ± 0.00619 1.2883 ± 0.00219 5.2 Pronunciation data sets We had access to two proprietary pronunciation data sets, PDS1 and PDS2. In both sets, each example is an English word, typically a proper name. For each word, 20 possible phonemic sequences are available, ranked by some pronunciation model. Since the true pronunciation was not available, we set the top sequence to be the target label and used the remaining as the predictions made by the experts. The only difference between PDS1 and PDS2 is their size: 1,313 words for PDS1 and 6,354 for PDS2. In both cases, on-line based algorithms, specifically HMVote, significantly outperform the best model as well as HSLE, see Table 3. The poor performance of HESPBoost is due to the fact that the weak learning assumption is violated after 5-8 iterations and hence the algorithm terminates. It can be argued that for this task the edit-distance is a more suitable measure of performance than the average Hamming loss. Thus, we also report the results of our experiments in terms of the edit-distance in Table 4. Remarkably, our on-line based algorithms achieve a comparable improvement over the performance of the best model in the case of edit-distance as well. 5.3 OCR data set Rob Kassel’s OCR data set is available for download from http://ai.stanford.edu/˜btaskar/ ocr/. It contains 6,877 word instances with a total of 52,152 characters. Each character is represented by 16 × 8 = 128 binary pixels. The task is to predict a word given its sequence of pixel vectors. To generate experts, we used several software packages: CRFsuite (Okazaki, 2007) and SVMstruct, SVMmulticlass (Joachims, 2008), and Table 5: Average Normalized Hamming Loss, TR1 and TR2. βTR1 = 0.95, βTR2 = 0.98, TSLE = 100, δ = 0.05. TR1, m = 800 TR2, m = 1000 HMVote 0.0850 ± 0.00096 0.0746 ± 0.00014 HFPL 0.0859 ± 0.00110 0.0769 ± 0.00218 HCV 0.0843 ± 0.00006 0.0741 ± 0.00011 HFPL-CV 0.1093 ± 0.00129 0.1550 ± 0.00182 HESPBoost 0.1041 ± 0.00056 0.1414 ± 0.00233 HSLE 0.0778 ± 0.00934 0.0814 ± 0.02558 HRand 0.1128 ± 0.00048 0.1652 ± 0.00077 Best hj 0.1032 ± 0.00007 0.1415 ± 0.00005 the Stanford Classifier (Rafferty et al., 2014). We trained these algorithms on each of the predefined folds of the data set and generated predictions on the test fold using the resulting models. Our results (see (Cortes et al., 2014a)) show that ensemble methods lead only to a small improvement in performance over the best hj. This is because here the best model hj dominates all other experts and ensemble methods cannot benefit from patching together different outputs. 5.4 Penn Treebank data set The part-of-speech task, POS, consists of labeling each word of a sentence with its correct part-of-speech tag. The Penn Treebank 2 data set is available through LDC license at http: //www.cis.upenn.edu/˜treebank/ and contains 251,854 sentences with a total of 6,080,493 tokens and 45 different parts-of-speech. For the first experiment, TR1, we used 4 disjoint training sets to produce 4 SVMmulticlass models and 4 maximum entropy models using the Stanford Classifier. We also used the union of these training sets to devise one CRFsuite model. For the second experiment, TR2, we trained 5 SVMstruct models. 
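For reference, the two measures reported in these tables, the normalized Hamming loss and the edit distance of Table 4, can be sketched as follows. The Hamming loss here assumes equal-length sequences, and no claim is made about any additional normalization applied in the experiments.

```python
def hamming_loss(y_pred, y_true):
    """Normalized Hamming loss: fraction of substructure positions that
    disagree (equal-length sequences assumed in this sketch)."""
    assert len(y_pred) == len(y_true)
    return sum(p != t for p, t in zip(y_pred, y_true)) / len(y_true)

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(hamming_loss("smith", "smyth"), edit_distance("smith", "smythe"))
```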
The same features were used for both experiments. For the SVM algorithms, we generated 267,214 bag-of-word binary features. The Stanford Classifier and CRFsuite packages use internal routines to generate features. The results of the experiments are summarized in Table 5. For TR1, our on-line ensemble methods improve over the best model. Note that HSLE has the best average loss over 10 runs for this experiment. This comes at a price of much higher standard deviation which does not allow us to conclude that the difference in performance between our methods and HSLE is statistically significant. 9 Table 6: Average Normalized Hamming Loss, SDS. l ≥4, β = 0.97, δ = 0.05, TSLE = 100. p = 5, m = 1500 p = 10, m = 1200 HMVote 0.2465 ± 0.00248 0.2606 ± 0.00320 HFPL 0.2500 ± 0.00248 0.2622 ± 0.00316 HCV 0.2504 ± 0.00576 0.2755 ± 0.00212 HFPL-CV 0.2726 ± 0.00839 0.3219 ± 0.01176 HESPBoost 0.2572 ± 0.00062 0.2864 ± 0.00103 HSLE 0.2572 ± 0.00061 0.2864 ± 0.00102 HRand 0.2877 ± 0.00480 0.3430 ± 0.00468 Best hj 0.2573 ± 0.00060 0.2865 ± 0.00101 In fact, on two runs, HSLE chooses an ensemble consisting of a single expert and fails to outperform the best model. 5.5 Speech recognition data set For our last set of experiments, we used another proprietary speech recognition data set, SDS. Each example in this data set is represented by a sequence of length l ∈[2, 15]. Therefore, for training we padded the true labels and the expert predictions to normalize the sequence lengths. For each of the 22,298 examples, there are between 2 and 251 expert predictions available. Since the ensemble methods we presented assume that the predictions of all p experts are available for each example in the training and test sets, we needed to restrict ourselves to the subsets of the data where at least some fixed number of expert predictions were available. In particular, we considered p = 5, 10, 20 and 50. For each value of p we used only the top p experts in our ensembles. Our initial experiments showed that, as in the case of OCR data set, ensemble methods offer only a modest increase in performance over the best hj. This is again largely due to the dominant performance of the best expert hj. However, it was observed that the accuracy of the best model is a decreasing function of l, suggesting that ensemble algorithm may be used to improve performance for longer sequences. Subsequent experiments show that this is indeed the case: when training and testing with l ≥4, ensemble algorithms outperform the best model. Table 6 and Table 7 summarize these results for p = 5, 10, 20, 50. Our results suggest that the following simple scheme can be used: for short sequences use the best expert model and for longer sequences, use the ensemble model. A more elaborate variant of this algorithm can be derived based on the obserTable 7: Average Normalized Hamming Loss, SDS. l ≥4, β = 0.97,δ = 0.05, TSLE = 100. p = 20, m = 900 p = 50, m = 700 HMVote 0.2773 ± 0.00139 0.3217 ± 0.00375 HFPL 0.2797 ± 0.00154 0.3189 ± 0.00344 HCV 0.2986 ± 0.00075 0.3401 ± 0.00054 HFPL-CV 0.3816 ± 0.01457 0.4451 ± 0.01360 HESPBoost 0.3115 ± 0.00089 0.3426 ± 0.00071 HSLE 0.3114 ± 0.00087 0.3425 ± 0.00076 HRand 0.3977 ± 0.00302 0.4608 ± 0.00303 Best hj 0.3116 ± 0.00087 0.3427 ± 0.00077 vation that the improvement in accuracy of the ensemble model over the best expert increases with the number of experts available. 
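The simple length-based scheme suggested above can be written down directly; in this sketch the threshold of 4 mirrors the l ≥ 4 setting of Tables 6 and 7, and best_expert and ensemble are placeholders for the corresponding trained predictors.

```python
def predict_with_dispatch(x, best_expert, ensemble, min_len=4):
    """Use the single best expert for short sequences and the ensemble
    (e.g. HMVote) for longer ones, as suggested by the SDS experiments."""
    if len(x) < min_len:
        return best_expert(x)
    return ensemble(x)

# Toy usage with stand-in predictors.
best = lambda x: ["B"] * len(x)
ens = lambda x: ["E"] * len(x)
print(predict_with_dispatch([1, 2, 3], best, ens))        # short -> best expert
print(predict_with_dispatch([1, 2, 3, 4, 5], best, ens))  # long -> ensemble
```

A more elaborate dispatcher could also condition on the number of experts available, following the observation made above.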
6 Conclusion We presented a broad analysis of the problem of ensemble structured prediction, including a series of algorithms with learning guarantees and extensive experiments. Our results show that our algorithms, most notably HMVote, can result in significant benefits in several tasks, which can be of a critical practical importance. We also reported very favorable results for HMVote when used with the edit-distance, which is the standard loss used in many applications. A natural extension of this work consists of devising new algorithms and providing learning guarantees specific to other loss functions such as the edit-distance. While we aimed for an exhaustive study, including multiple on-learning algorithms, different conversions to batch and derandomizations, we are aware that the problem we studied is very rich and admits many more facets and scenarios that we plan to investigate in the future. Finally, the boosting-style algorithm we presented can be enhanced using recent theoretical and algorithmic results on deep boosting (Cortes et al., 2014b). Acknowledgments We warmly thank our colleagues Francoise Beaufays and Fuchun Peng for kindly extracting and making available to us the pronunciation data sets, Cyril Allauzen for providing us with the speech recognition data, and Richard Sproat and Brian Roark for help with other data sets. This work was partly funded by the NSF award IIS-1117591 and the NSERC PGS D3 award. 10 References [Breiman1996] Leo Breiman. 1996. Bagging predictors. Machine Learning, 24(2):123–140. [Caruana et al.2004] R. Caruana, A. Niculescu-Mizil, G. Crew, and A. Ksikes. 2004. Ensemble selection from libraries of models. In Proceedings of ICML, pages 18–. [Cesa-Bianchi et al.2004] N. Cesa-Bianchi, A. Conconi, and C. Gentile. 2004. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050–2057. [Collins and Koo2005] Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–70. [Collins2002] M. Collins. 2002. Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms. In Proceedings of ACL, pages 1–8. [Cortes et al.2005] C. Cortes, M. Mohri, and J. Weston. 2005. A general regression technique for learning transductions. In Proceedings of ICML 2005, pages 153–160, New York, NY, USA. ACM. [Cortes et al.2014a] Corinna Cortes, Vitaly Kuznetsov, and Mehryar Mohri. 2014a. Ensemble methods for structured prediction. In Proceedings of ICML. [Cortes et al.2014b] Corinna Cortes, Mehryar Mohri, and Umar Syed. 2014b. Deep boosting. In Proceedings of the Fourteenth International Conference on Machine Learning (ICML 2014). [Dekel and Singer2005] O. Dekel and Y. Singer. 2005. Data-driven online to batch conversion. In Advances in NIPS 18, pages 1207–1216. [Fiscus1997] Jonathan G Fiscus. 1997. Postprocessing system to yield reduced word error rates: Recognizer output voting error reduction (rover). In Proceedings of the 1997 IEEE ASRU Workshop, pages 347–354, Santa Barbara, CA. [Freund and Schapire1997] Y. Freund and R. Schapire. 1997. A decision-theoretic generalization of on-line learning and application to boosting. Journal of Computer and System Sciences, 55(1):119–139. [Freund et al.2004] Yoav Freund, Yishay Mansour, and Robert E. Schapire. 2004. Generalization bounds for averaged classifiers. The Annals of Statistics, 32:1698– 1722. 
[Ghoshal et al.2009] Arnab Ghoshal, Martin Jansche, Sanjeev Khudanpur, Michael Riley, and Morgan Ulinski. 2009. Web-derived pronunciations. In Proceedings of ICASSP, pages 4289–4292. [Joachims2008] T. Joachims. 2008. Support vector machines for complex outputs. [Kalai and Vempala2005] A. Kalai and S. Vempala. 2005. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307. [Kocev et al.2013] D. Kocev, C. Vens, J. Struyf, and S. Džeroski. 2013. Tree ensembles for predicting structured outputs. Pattern Recognition, 46(3):817–833, March. [Koltchinskii and Panchenko2002] Vladimir Koltchinskii and Dmitry Panchenko. 2002. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, 30. [Lafferty et al.2001] J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282–289. [Littlestone and Warmuth1994] N. Littlestone and M. Warmuth. 1994. The weighted majority algorithm. Information and Computation, 108(2):212–261. [Littlestone1989] N. Littlestone. 1989. From on-line to batch learning. In Proceedings of COLT 2, pages 269–284. [MacKay1991] David J. C. MacKay. 1991. Bayesian methods for adaptive models. Ph.D. thesis, California Institute of Technology. [MacKay1997] David J. C. MacKay. 1997. Ensemble learning for hidden Markov models. Technical report, Cavendish Laboratory, Cambridge UK. [Mohri et al.2008] Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. 2008. Speech recognition with weighted finite-state transducers. In Handbook on Speech Processing and Speech Communication, Part E: Speech recognition. Springer-Verlag. [Mohri et al.2012] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. 2012. Foundations of Machine Learning. The MIT Press. [Mohri1997] Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–311. [Nguyen and Guo2007] N. Nguyen and Y. Guo. 2007. Comparison of sequence labeling algorithms and extensions. In Proceedings of ICML, pages 681–688. [Okazaki2007] N. Okazaki. 2007. CRFsuite: a fast implementation of conditional random fields (CRFs). [Petrov2010] Slav Petrov. 2010. Products of random latent variable grammars. In HLT-NAACL, pages 19–27. [Rafferty et al.2014] A. Rafferty, A. Kleeman, J. Finkel, and C. Manning. 2014. Stanford classifier. [Sagae and Lavie2006] K. Sagae and A. Lavie. 2006. Parser combination by reparsing. In Proceedings of HLT/NAACL, pages 129–132. [Schapire and Singer1999] Robert E. Schapire and Yoram Singer. 1999. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297–336. [Schapire and Singer2000] Robert E. Schapire and Yoram Singer. 2000. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2-3):135–168. [Schapire et al.1997] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. 1997. Boosting the margin: A new explanation for the effectiveness of voting methods. In ICML, pages 322–330. [Smyth and Wolpert1999] Padhraic Smyth and David Wolpert. 1999. Linearly combining density estimators via stacking. Machine Learning, 36:59–83, July. [Takimoto and Warmuth2003] E. Takimoto and M. K. Warmuth. 2003. Path kernels and multiplicative updates. JMLR, 4:773–818. [Taskar et al.2004] B. Taskar, C. Guestrin, and D. Koller. 2004. Max-margin Markov networks. In Advances in NIPS 16. MIT Press, Cambridge, MA.
[Tsochantaridis et al.2005] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. 2005. Large margin methods for structured and interdependent output variables. JMLR, 6:1453–1484, December. [Wang et al.2007] Q. Wang, D. Lin, and D. Schuurmans. 2007. Simple training of dependency parsers via structured boosting. In Proceedings of IJCAI 20, pages 1756–1762. [Zeman and Žabokrtský2005] D. Zeman and Z. Žabokrtský. 2005. Improving parsing accuracy by combining diverse dependency parsers. In Proceedings of IWPT 9, pages 171–178. [Zhang et al.2009] H. Zhang, M. Zhang, C. Tan, and H. Li. 2009. K-best combination of syntactic parsers. In Proceedings of EMNLP: Volume 3, pages 1552–1560.
2014
1
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 100–110, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Lattice Desegmentation for Statistical Machine Translation Mohammad Salameh† Colin Cherry‡ Grzegorz Kondrak† †Department of Computing Science ‡National Research Council Canada University of Alberta 1200 Montreal Road Edmonton, AB, T6G 2E8, Canada Ottawa, ON, K1A 0R6, Canada {msalameh,gkondrak}@ualberta.ca [email protected] Abstract Morphological segmentation is an effective sparsity reduction strategy for statistical machine translation (SMT) involving morphologically complex languages. When translating into a segmented language, an extra step is required to desegment the output; previous studies have desegmented the 1-best output from the decoder. In this paper, we expand our translation options by desegmenting n-best lists or lattices. Our novel lattice desegmentation algorithm effectively combines both segmented and desegmented views of the target language for a large subspace of possible translation outputs, which allows for inclusion of features related to the desegmentation process, as well as an unsegmented language model (LM). We investigate this technique in the context of English-to-Arabic and English-to-Finnish translation, showing significant improvements in translation quality over desegmentation of 1-best decoder outputs. 1 Introduction Morphological segmentation is considered to be indispensable when translating between English and morphologically complex languages such as Arabic. Morphological complexity leads to much higher type to token ratios than English, which can create sparsity problems during translation model estimation. Morphological segmentation addresses this issue by splitting surface forms into meaningful morphemes, while also performing orthographic transformations to further reduce sparsity. For example, the Arabic noun ÈðYÊË lldwl “to the countries” is segmented as l+ “to” Aldwl “the countries”. When translating from Arabic, this segmentation process is performed as input preprocessing and is otherwise transparent to the translation system. However, when translating into Arabic, the decoder produces segmented output, which must be desegmented to produce readable text. For example, l+ Aldwl must be converted to lldwl. Desegmentation is typically performed as a post-processing step that is independent from the decoding process. While this division of labor is useful, the pipeline approach may prevent the desegmenter from recovering from errors made by the decoder. Despite the efforts of the decoder’s various component models, the system may produce mismatching segments, such as s+ hzymp, which pairs the future particle s+ “will” with a noun hzymp “defeat”, instead of a verb. In this scenario, there is no right desegmentation; the postprocessor has been dealt a losing hand. In this work, we show that it is possible to maintain the sparsity-reducing benefit of segmentation while translating directly into unsegmented text. We desegment a large set of possible decoder outputs by processing n-best lists or lattices, which allows us to consider both the segmented and desegmented output before locking in the decoder’s decision. We demonstrate that significant improvements in translation quality can be achieved by training a linear model to re-rank this transformed translation space. 
2 Related Work Translating into morphologically complex languages is a challenging and interesting task that has received much recent attention. Most techniques approach the problem by transforming the target language in some manner before training the translation model. They differ in what transformations are performed and at what stage they are reversed. The transformation might take the form of a morphological analysis or a morphological segmentation. 100 2.1 Morphological Analysis Many languages have access to morphological analyzers, which annotate surface forms with their lemmas and morphological features. Bojar (2007) incorporates such analyses into a factored model, to either include a language model over target morphological tags, or model the generation of morphological features. Other approaches train an SMT system to predict lemmas instead of surface forms, and then inflect the SMT output as a postprocessing step (Minkov et al., 2007; Clifton and Sarkar, 2011; Fraser et al., 2012; El Kholy and Habash, 2012b). Alternatively, one can reparameterize existing phrase tables as exponential models, so that translation probabilities account for source context and morphological features (Jeong et al., 2010; Subotin, 2011). Of these approaches, ours is most similar to the translate-then-inflect approach, except we translate and then desegment. In particular, Toutanova et al. (2008) inflect and re-rank n-best lists in a similar manner to how we desegment and re-rank n-best lists or lattices. 2.2 Morphological Segmentation Instead of producing an abstract feature layer, morphological segmentation transforms the target sentence by segmenting relevant morphemes, which are then handled as regular tokens during alignment and translation. This is done to reduce sparsity and to improve correspondence with the source language (usually English). Such a segmentation can be produced as a byproduct of analysis (Oflazer and Durgar El-Kahlout, 2007; Badr et al., 2008; El Kholy and Habash, 2012a), or may be produced using an unsupervised morphological segmenter such as Morfessor (Luong et al., 2010; Clifton and Sarkar, 2011). Work on target language morphological segmentation for SMT can be divided into three subproblems: segmentation, desegmentation and integration. Our work is concerned primarily with the integration problem, but we will discuss each subproblem in turn. The usefulness of a target segmentation depends on its correspondence to the source language. If a morphological feature does not manifest itself as a separate token in the source, then it may be best to leave its corresponding segment attached to the stem. A number of studies have looked into what granularity of segmentation is best suited for a particular language pair (Oflazer and Durgar El-Kahlout, 2007; Badr et al., 2008; Clifton and Sarkar, 2011; El Kholy and Habash, 2012a). Since our focus here is on integrating segmentation into the decoding process, we simply adopt the segmentation strategies recommended by previous work: the Penn Arabic Treebank scheme for English-Arabic (El Kholy and Habash, 2012a), and an unsupervised scheme for EnglishFinnish (Clifton and Sarkar, 2011). Desegmentation is the process of converting segmented words into their original surface form. For many segmentations, especially unsupervised ones, this amounts to simple concatenation. 
However, more complex segmentations, such as the Arabic tokenization provided by MADA (Habash et al., 2009), require further orthographic adjustments to reverse normalizations performed during segmentation. Badr et al. (2008) present two Arabic desegmentation schemes: table-based and rule-based. El Kholy and Habash (2012a) provide an extensive study on the influence of segmentation and desegmentation on English-toArabic SMT. They introduce an additional desegmentation technique that augments the table-based approach with an unsegmented language model. Salameh et al. (2013) replace rule-based desegmentation with a discriminatively-trained character transducer. In this work, we adopt the Table+Rules approach of El Kholy and Habash (2012a) for English-Arabic, while concatenation is sufficient for English-Finnish. Work on integration attempts to improve SMT performance for morphologically complex target languages by going beyond simple pre- and postprocessing. Oflazer and Durgar El-Kahlout (2007) desegment 1000-best lists for English-to-Turkish translation to enable scoring with an unsegmented language model. Unlike our work, they replace the segmented language model with the unsegmented one, allowing them to tune the linear model parameters by hand. We use both segmented and unsegmented language models, and tune automatically to optimize BLEU. Like us, Luong et al. (2010) tune on unsegmented references,1 and translate with both segmented and unsegmented language models for English-to-Finnish translation. However, they adopt a scheme of word-boundary-aware 1Tuning on unsegmented references does not require substantial modifications to the standard SMT pipeline. For example, Badr et al. (2008) also tune on unsegmented references by simply desegmenting SMT output before MERT collects sufficient statistics for BLEU. 101 morpheme-level phrase extraction, meaning that target phrases include only complete words, though those words are segmented into morphemes. This enables full decoder integration, where we do n-best and lattice re-ranking. But it also comes at a substantial cost: when target phrases include only complete words, the system can only generate word forms that were seen during training. In this setting, the sparsity reduction from segmentation helps word alignment and target language modeling, but it does not result in a more expressive translation model. Furthermore, it becomes substantially more difficult to have non-adjacent source tokens contribute morphemes to a single target word. For example, when translating “with his blue car” into the Arabic ZA¯P QË@ éKPAJ ‚. bsyArth AlzrqA’, the target word bsyArth is composed of three tokens: b+ “with”, syArp “car” and +h “his”. With word-boundaryaware phrase extraction, a phrase pair containing all of “with his blue car” must have been seen in the parallel data to translate the phrase correctly at test time. With lattice desegmentation, we need only to have seen AlzrqA’ “blue” and the three morphological pieces of bsyArth for the decoder and desegmenter to assemble the phrase. 3 Methods Our goal in this work is to benefit from the sparsity-reducing properties of morphological segmentation while simultaneously allowing the system to reason about the final surface forms of the target language. We approach this problem by augmenting an SMT system built over target segments with features that reflect the desegmented target words. 
In this section, we describe our various strategies for desegmenting the SMT system’s output space, along with the features that we add to take advantage of this desegmented view. 3.1 Baselines The two obvious baseline approaches each decode using one view of the target language. The unsegmented approach translates without segmenting the target. This trivially allows for an unsegmented language model and never makes desegmentation errors. However, it suffers from data sparsity and poor token-to-token correspondence with the source language. The one-best desegmentation approach segments the target language at training time and then desegments the one-best output in postprocessing. This resolves the sparsity issue, but does not allow the decoder to take into account features of the desegmented target. To the best of our knowledge, we are the first group to go beyond one-best desegmentation for English-to-Arabic translation. In English-to-Finnish, although alternative integration strategies have seen some success (Luong et al., 2010), the current state-ofthe-art performs one-best-desegmentation (Clifton and Sarkar, 2011). 3.2 n-best Desegmentation The one-best approach can be extended easily by desegmenting n-best lists of segmented decoder output. Doing so enables the inclusion of an unsegmented target language model, and with a small amount of bookkeeping, it also allows the inclusion of features related to the operations performed during desegmentation (see Section 3.4). With new features reflecting the desegmented output, we can re-tune our enhanced linear model on a development set. Following previous work, we will desegment 1000-best lists (Oflazer and Durgar El-Kahlout, 2007). Once n-best lists have been desegmented, we can tune on unsegmented references as a sidebenefit. This could improve translation quality, as it brings our training scenario closer to our test scenario (test BLEU is always measured on unsegmented references). In particular, it could address issues with translation length mismatch. Previous work that has tuned on unsegmented references has reported mixed results (Badr et al., 2008; Luong et al., 2010). 3.3 Lattice Desegmentation An n-best list reflects a tiny portion of a decoder’s search space, typically fixed at 1000 hypotheses. Lattices2 can represent an exponential number of hypotheses in a compact structure. In this section, we discuss how a lattice from a multi-stack phrasebased decoder such as Moses (Koehn et al., 2007) can be desegmented to enable word-level features. Finite State Analogy A phrase-based decoder produces its output from left to right, with each operation appending the translation of a source phrase to a growing target hypothesis. Translation continues un2Or forests for hierarchical and syntactic decoders. 102 0 1 b+ 2 lEbp 5 +hm 4 +hA 3 AlTfl (a)   (b)   (c)   1 AlTfl:AlTfl 0 b+:<epsilon> 2 lEbp:<epsilon> <epsilon>:blEbp +hA:blEbthA +hm:blEbthm 0 5 blEbthm 4 blEbthA 2 blEbp 3 AlTfl Transduces   into   Figure 1: The finite state pipeline for a lattice translating the English fragment “with the child’s game”. The input morpheme lattice (a) is desegmented by composing it with the desegmenting transducer (b) to produce the word lattice (c). The tokens in (a) are: b+ “with”, lEbp “game”, +hm “their”, +hA “her”, and AlTfl“the child”. til each source word has been covered exactly once (Koehn et al., 2003). 
The search graph of a phrase-based decoder can be interpreted as a lattice, which can be interpreted as a finite state acceptor over target strings. In its most natural form, such an acceptor emits target phrases on each edge, but it can easily be transformed into a form with one edge per token, as shown in Figure 1a. This is sometimes referred to as a word graph (Ueffing et al., 2002), although in our case the segmented phrase table also produces tokens that correspond to morphemes. Our goal is to desegment the decoder’s output lattice, and in doing so, gain access to a compact, desegmented view of a large portion of the translation search space. This can be accomplished by composing the lattice with a desegmenting transducer that consumes morphemes and outputs desegmented words. This transducer must be able to consume every word in our lattice’s output vocabulary. We define a word using the following regular expression: [prefix]* [stem] [suffix]* | [prefix]+ [suffix]+ (1) where [prefix], [stem] and [suffix] are nonoverlapping sets of morphemes, whose members are easily determined using the segmenter’s segment boundary markers.3 The second disjunct of Equation 1 covers words that have no clear stem, such as the Arabic éË lh “for him”, segmented as l+ “for” +h “him”. Equation 1 may need to be modified for other languages or segmentation schemes, but our techniques generalize to any definition that can be written as a regular expression. A desegmenting transducer can be constructed by first encoding our desegmenter as a table that maps morpheme sequences to words. Regardless of whether the original desegmenter was based on concatenation, rules or table-lookup, it can be encoded as a lattice-specific table by applying it to an enumeration of all words found in the lattice. We can then transform that table into a finite state transducer with one path per table entry. Finally, we take the closure of this transducer, so that the resulting machine can transduce any sequence of words. The desegmenting trans3Throughout this paper, we use “+” to mark morphemes as prefixes or suffixes, as in w+ or +h. In Equation 1 only, we overload “+” as the Kleene cross: X+ == XX∗. 103 ducer for our running example is shown in Figure 1b. Note that tokens requiring no desegmentation simply emit themselves. The lattice (Figure 1a) can then be desegmented by composing it with the transducer (1b), producing a desegmented lattice (1c). This is a natural place to introduce features that describe the desegmentation process, such as scores provided by a desegmentation table, which can be incorporated into the desegmenting transducer’s edge weights. We now have a desegmented lattice, but it has not been annotated with an unsegmented (wordlevel) language model. In order to annotate lattice edges with an n-gram LM, every path coming into a node must end with the same sequence of (n−1) tokens. If this property does not hold, then nodes must be split until it does.4 This property is maintained by the decoder’s recombination rules for the segmented LM, but it is not guaranteed for the desegmented LM. Indeed, the expanded word-level context is one of the main benefits of incorporating a word-level LM. Fortunately, LM annotation as well as any necessary lattice modifications can be performed simultaneously by composing the desegmented lattice with a finite state acceptor encoding the LM (Roark et al., 2011). In summary, we are given a segmented lattice, which encodes the decoder’s translation space as an acceptor over morphemes. 
We compose this acceptor with a desegmenting transducer, and then with an unsegmented LM acceptor, producing a fully annotated, desegmented lattice. Instead of using a tool kit such as OpenFst (Allauzen et al., 2007), we implement both the desegmenting transducer and the LM acceptor programmatically. This eliminates the need to construct intermediate machines, such as the lattice-specific desegmenter in Figure 1b, and facilitates working with edges annotated with feature vectors as opposed to single weights. Programmatic Desegmentation Lattice desegmentation is a non-local lattice transformation. That is, the morphemes forming a word might span several edges, making desegmentation non-trivial. Luong et al. (2010) address this problem by forcing the decoder’s phrase table to respect word boundaries, guaranteeing that each desegmentable token sequence is local to an edge. 4Or the LM composition can be done dynamically, effectively decoding the lattice with a beam or cube-pruned search (Huang and Chiang, 2007). Inspired by the use of non-local features in forest decoding (Huang, 2008), we present an algorithm to find chains of edges that correspond to desegmentable token sequences, allowing lattice desegmentation with no phrase-table restrictions. This algorithm can be seen as implicitly constructing a customized desegmenting transducer and composing it with the input lattice on the fly. Before describing the algorithm, we define some notation. An input morpheme lattice is a triple ⟨ns, N, E⟩, where N is a set of nodes, E is a set of edges, and ns ∈N is the start node that begins each path through the lattice. Each edge e ∈E is a 4-tuple ⟨from, to, lex, w⟩, where from, to ∈N are head and tail nodes, lex is a single token accepted by this edge, and w is the (potentially vector-valued) edge weight. Tokens are drawn from one of three non-overlapping morphosyntactic sets: lex ∈Prefix ∪Stem ∪Suffix, where tokens that do not require desegmentation, such as complete words, punctuation and numbers, are considered to be in Stem. It is also useful to consider the set of all outgoing edges for a node n.out = {e ∈E|e.from = n}. With this notation in place, we can define a chain c to be a sequence of edges [e1 . . . el] such that for 1 ≤i < l : ei.to = ei+1.from. We denote singleton chains with [e], and when unambiguous, we abbreviate longer chains with their start and end node [e1.from →el.to]. A chain is valid if it emits the beginning of a word as defined by the regular expression in Equation 1. A valid chain is complete if its edges form an entire word, and if it is part of a path through the lattice that consists only of words. In Figure 1a, the complete chains are [0 →2], [0 →4], [0 →5], and [2 →3]. The path restriction on complete chains forces words to be bounded by other words in order to be complete.5 For example, if we removed the edge 2 →3 (AlTfl) from Figure 1a, then [0 →2] ([b+ lEbp]) would cease to be a complete chain, but it would still be a valid chain. Note that in the finite-state analogy, the path restriction is implicit in the composition operation. Algorithm 1 desegments a lattice by finding all complete chains and replacing each one with a single edge. It maintains a work list of nodes that lie on the boundary between words, and for each node on this list, it launches a depth first search 5Sentence-initial suffix morphemes and sentence-final prefix morphemes represent a special case that we omit for the sake of brevity. Lacking stems, they are left segmented. 
104 Algorithm 1 Desegment a lattice ⟨ns, N, E⟩ {Initialize output lattice and work list WL} n′ s = ns, N ′ = ∅, E′ = ∅, WL = [ns] while n = WL.pop() do {Work on each node only once} if n ∈N ′ then continue N ′ = N ′ ∪{n} {Initialize the chain stack C} C = ∅ for e ∈n.out do if [e] is valid then C.push([e]) {Depth-first search for complete chains} while [e1, . . . , el] = C.pop() do {Attempt to extend chain} for e ∈el.to.out do if [e1 . . . el, e] is valid then C.push([e1, . . . , el, e]) else Mark [e1, . . . , el] as complete {Desegment complete chains} if [e1, . . . , el] is complete then WL.push(el.to) E′ = E′ ∪{deseg([e1, . . . , el])} return ⟨n′ s, N ′, E′⟩ to find all complete chains extending from it. The search recognizes the valid chain c to be complete by finding an edge e such that c + e forms a chain, but not a valid one. By inspection of Equation 1, this can only happen when a prefix or stem follows a stem or suffix, which always marks a word boundary. The chains found by this search are desegmented and then added to the output lattice as edges. The nodes at end points of these chains are added to the work list, as they lie at word boundaries by definition. Note that although this algorithm creates completely new edges, the resulting node set N ′ will be a subset of the input node set N. The complement N −N ′ will consist of nodes that are word-internal in all paths through the input lattice, such as node 1 in Figure 1a. Programmatic LM Integration Programmatic composition of a lattice with an n-gram LM acceptor is a well understood problem. We use a dynamic program to enumerate all (n −1)-word contexts leading into a node, and then split the node into multiple copies, one for each context. With each node corresponding to a single LM context, annotation of outgoing edges with n-gram LM scores is straightforward. 3.4 Desegmentation Features Our re-ranker has access to all of the features used by the decoder, in addition to a number of features enabled by desegmentation. Desegmentation Score We use a table-based desegmentation method for Arabic, which is based on segmenting an Arabic training corpus and memorizing the observed transformations to reverse them later. Finnish does not require a table, as all words can be desegmented with simple concatenation. The Arabic table consists of X →Y entries, where X is a target morpheme sequence and Y is a desegmented surface form. Several entries may share the same X, resulting in multiple desegmentation options. For the sake of symmetry with the unambiguous Finnish case, we augment Arabic n-best lists or lattices with only the most frequent desegmentation Y .6 We provide the desegmentation score log p(Y |X)= log count of X →Y count of X  as a feature, to indicate the entry’s ambiguity in the training data.7 When an X is missing from the table, we fall back on a set of desegmentation rules (El Kholy and Habash, 2012a) and this feature is set to 0. This feature is always 0 for English-Finnish. Contiguity One advantage of our approach is that it allows discontiguous source words to translate into a single target word. In order to maintain some control over this powerful capability, we create three binary features that indicate the contiguity of a desegmentation. The first feature indicates that the desegmented morphemes were translated from contiguous source words. The second indicates that the source words contained a single discontiguity, as in a word-by-word translation of the “with his blue car” example from Section 2.2. 
The third indicates two or more discontiguities. Unsegmented LM A 5-gram LM trained on unsegmented target text is used to assess the fluency of the desegmented word sequence. 4 Experimental Setup We train our English-to-Arabic system using 1.49 million sentence pairs drawn from the NIST 2012 training set, excluding the UN data. This training set contains about 40 million Arabic tokens before 6Allowing the re-ranker to choose between multiple Y s is a natural avenue for future work. 7We also experimented on log p(X|Y ) as an additional feature, but observed no improvement in translation quality. 105 segmentation, and 47 million after segmentation. We tune on the NIST 2004 evaluation set (1353 sentences) and evaluate on NIST 2005 (1056 sentences). As these evaluation sets are intended for Arabic-to-English translation, we select the first English reference to use as our source text. Our English-to-Finnish system is trained on the same Europarl corpus as Luong et al. (2010) and Clifton and Sarkar (2011), which has roughly one million sentence pairs. We also use their development and test sets (2000 sentences each). 4.1 Segmentation For Arabic, morphological segmentation is performed by MADA 3.2 (Habash et al., 2009), using the Penn Arabic Treebank (PATB) segmentation scheme as recommended by El Kholy and Habash (2012a). For both segmented and unsegmented Arabic, we further normalize the script by converting different forms of Alif @ @ @ @ and Ya ø ø to bare Alif @ and dotless Ya ø. To generate the desegmentation table, we analyze the segmentations from the Arabic side of the parallel training data to collect mappings from morpheme sequences to surface forms. For Finnish, we adopt the Unsup L-match segmentation technique of Clifton and Sarkar (2011), which uses Morfessor (Creutz and Lagus, 2005) to analyze the 5,000 most frequent Finnish words. The analysis is then applied to the Finnish side of the parallel text, and a list of segmented suffixes is collected. To improve coverage, words are further segmented according to their longest matching suffix from the list. As Morfessor does not perform any orthographic normalizations, it can be desegmented with simple concatenation. 4.2 Systems We align the parallel data with GIZA++ (Och et al., 2003) and decode using Moses (Koehn et al., 2007). The decoder’s log-linear model includes a standard feature set. Four translation model features encode phrase translation probabilities and lexical scores in both directions. Seven distortion features encode a standard distortion penalty as well as a bidirectional lexicalized reordering model. A KN-smoothed 5-gram language model is trained on the target side of the parallel data with SRILM (Stolcke, 2002). Finally, we include word and phrase penalties. The decoder uses the default parameters for English-to-Arabic, except that the maximum phrase length is set to 8. For Englishto-Finnish, we follow Clifton and Sarkar (2011) in setting the hypothesis stack size to 100, distortion limit to 6, and maximum phrase length to 20. The decoder’s log-linear model is tuned with MERT (Och, 2003). Re-ranking models are tuned using a batch variant of hope-fear MIRA (Chiang et al., 2008; Cherry and Foster, 2012), using the n-best variant for n-best desegmentation, and the lattice variant for lattice desegmentation. MIRA was selected over MERT because we have an in-house implementation that can tune on lattices very quickly. 
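As an illustration of the desegmentation score and contiguity features of Section 3.4, here is a minimal sketch. The table format, the handling of the rule-based fallback, and the encoding of source positions are simplifying assumptions rather than the paper's implementation.

```python
import math
from collections import Counter

def build_deseg_table(segmented_desegmented_pairs):
    """Count X -> Y events from (morpheme-sequence, surface-form) pairs
    observed in the segmented training corpus."""
    xy_counts, x_counts = Counter(), Counter()
    for x, y in segmented_desegmented_pairs:
        xy_counts[(x, y)] += 1
        x_counts[x] += 1
    return xy_counts, x_counts

def deseg_score_feature(x, xy_counts, x_counts):
    """log p(Y|X) for the most frequent desegmentation Y of X; 0 when X is
    unseen (the paper then falls back on desegmentation rules)."""
    candidates = [(c, y) for (xx, y), c in xy_counts.items() if xx == x]
    if not candidates:
        return 0.0, None          # a rule-based fallback would produce Y here
    count, best_y = max(candidates)
    return math.log(count / x_counts[x]), best_y

def contiguity_features(source_positions):
    """Three binary indicators: contiguous, one discontiguity, two or more.
    source_positions is the sorted list of source word indices that
    contributed morphemes to the desegmented word (an assumed encoding)."""
    gaps = sum(1 for a, b in zip(source_positions, source_positions[1:])
               if b - a > 1)
    return [int(gaps == 0), int(gaps == 1), int(gaps >= 2)]

table = build_deseg_table([("l+ Aldwl", "lldwl"), ("l+ Aldwl", "lldwl")])
print(deseg_score_feature("l+ Aldwl", *table))   # unambiguous entry: log 1 = 0
print(contiguity_features([3, 4, 7]))            # one discontiguity
```

An unambiguous entry yields a score of log 1 = 0, which is consistent with the feature always being 0 for the concatenative English-Finnish case.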
During development, we confirmed that MERT and MIRA perform similarly, as is expected with fewer than 20 features. Both the decoder’s log-linear model and the re-ranking models are trained on the same development set. Historically, we have not seen improvements from using different tuning sets for decoding and reranking. Lattices are pruned to a density of 50 edges per word before re-ranking. We test four different systems. Our first baseline is Unsegmented, where we train on unsegmented target text, requiring no desegmentation step. Our second baseline is 1-best Deseg, where we train on segmented target text and desegment the decoder’s 1-best output. Starting from the system that produced 1-best Deseg, we then output either 1000-best lists or lattices to create our two experimental systems. The 1000-best Deseg system desegments, augments and re-ranks the decoder’s 1000-best list, while Lattice Deseg does the same in the lattice. We augment n-best lists and lattices using the features described in Section 3.4.8 We evaluate our system using BLEU (Papineni et al., 2002) and TER (Snover et al., 2006). Following Clark et al. (2011), we report average scores over five random tuning replications to account for optimizer instability. For the baselines, this means 5 runs of decoder tuning. For the desegmenting re-rankers, this means 5 runs of reranker tuning, each working on n-best lists or lattices produced by the same (representative) decoder weights. We measure statistical significance using MultEval (Clark et al., 2011), which implements a stratified approximate randomization test to account for multiple tuning replications. 8Development experiments on a small-data English-toArabic scenario indicated that the Desegmentation Score was not particularly useful, so we exclude it from the main comparison, but include it in the ablation experiments. 106 5 Results Tables 1 and 2 report results averaged over 5 tuning replications on English-to-Arabic and Englishto-Finnish, respectively. In all scenarios, both 1000-best Deseg and Lattice Deseg significantly outperform the 1-best Deseg baseline (p < 0.01). For English-to-Arabic, 1-best desegmentation results in a 0.7 BLEU point improvement over training on unsegmented Arabic. Moving to lattice desegmentation more than doubles that improvement, resulting in a BLEU score of 34.4 and an improvement of 1.0 BLEU point over 1-best desegmentation. 1000-best desegmentation also works well, resulting in a 0.6 BLEU point improvement over 1-best. Lattice desegmentation is significantly better (p < 0.01) than 1000-best desegmentation. For English-to-Finnish, the Unsup L-match segmentation with 1-best desegmentation does not improve over the unsegmented baseline. The segmentation may be addressing issues with model sparsity, but it is also introducing errors that would have been impossible had words been left unsegmented. In fact, even with our lattice desegmenter providing a boost, we are unable to see a significant improvement over the unsegmented model. As we attempted to replicate the approach of Clifton and Sarkar (2011) exactly by working with their segmented data, this difference is likely due to changes in Moses since the publication of their result. Nonetheless, the 1000-best and lattice desegmenters both produce significant improvements over the 1-best desegmentation baseline, with Lattice Deseg achieving a 1-point improvement in TER. 
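A minimal sketch of the longest-matching-suffix step of the Unsup L-match segmentation of Section 4.1, together with its concatenative desegmentation. The suffix inventory, the "+" marker convention, and the minimum stem length are illustrative assumptions; the actual suffix list comes from a Morfessor analysis of the 5,000 most frequent Finnish words.

```python
def l_match_segment(word, suffixes, min_stem=1):
    """Segment `word` by its longest matching suffix from `suffixes`,
    marking the suffix with a leading '+' so that desegmentation is plain
    concatenation after stripping the marker."""
    for suffix in sorted(suffixes, key=len, reverse=True):
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem:
            return [word[:-len(suffix)], "+" + suffix]
    return [word]

def desegment(tokens):
    """Reverse the segmentation by concatenating suffix tokens."""
    out = []
    for tok in tokens:
        if tok.startswith("+") and out:
            out[-1] += tok[1:]
        else:
            out.append(tok)
    return out

suffixes = {"ssa", "lla", "sta"}                  # illustrative Finnish-like suffixes
segmented = l_match_segment("talossa", suffixes)  # ['talo', '+ssa']
print(segmented, desegment(segmented))            # ... ['talossa']
```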
These results match the established state-of-the-art on this data set, but also indicate that there is still room for improvement in identifying the best segmentation strategy for Englishto-Finnish translation. We also tried a similar Morfessor-based segmentation for Arabic, which has an unsegmented test set BLEU of 32.7. As in Finnish, the 1-best desegmentation using Morfessor did not surpass the unsegmented baseline, producing a test BLEU of only 31.4 (not shown in Table 1). Lattice desegmentation was able to boost this to 32.9, slightly above 1-best desegmentation, but well below our best MADA desegmentation result of 34.4. There appears to be a large advantage to using MADA’s supervised segmentation in this scenario. Model Dev Test BLEU BLEU TER Unsegmented 24.4 32.7 49.4 1-best Deseg 24.4 33.4 48.6 1000-best Deseg 25.0 34.0 48.0 Lattice Deseg 25.2 34.4 47.7 Table 1: Results for English-to-Arabic translation using MADA’s PATB segmentation. Model Dev Test BLEU BLEU TER Unsegmented 15.4 15.1 70.8 1-best Deseg 15.3 14.8 71.9 1000-best Deseg 15.4 15.1 71.5 Lattice Deseg 15.5 15.1 70.9 Table 2: Results for English-to-Finnish translation using unsupervised segmentation. 5.1 Ablation We conducted an ablation experiment on Englishto-Arabic to measure the impact of the various features described in Section 3.4. Table 3 compares different combinations of features using lattice desegmentation. The unsegmented LM alone yields a 0.4 point improvement over the 1-best desegmentation score. Adding contiguity indicators on top of the unsegmented LM results in another 0.6 point improvement. As anticipated, the tuner assigns negative weights to discontiguous cases, encouraging the re-ranker to select a safer translation path when possible. Judging from the output on the NIST 2005 test set, the system uses these discontiguous desegmentations very rarely: only 5% of desegmented tokens align to discontiguous source phrases. Adding the desegmentation score to these two feature groups does not improve performance, confirming the results we observed during development. The desegmentation score would likely be useful in a scenario where we provide multiple desegmentation options to the re-ranker; for now, it indicates only the ambiguity of a fixed choice, and is likely redundant with information provided by the language model. 5.2 Error Analysis In order to better understand the source of our improvements in the English-to-Arabic scenario, we conducted an extensive manual analysis of the differences between 1-best and lattice deseg107 Features dev test 1-best Deseg 24.5 33.4 + Unsegmented LM 24.9 33.8 + Contiguity 25.2 34.4 + Desegmentation Score 25.2 34.3 Table 3: The effect of feature ablation on BLEU score for English-to-Arabic translation with lattice desegmentation. mentation on our test set. We compared the output of the two systems using the Unix tool wdiff, which transforms a solution to the longestcommon-subsequence problem into a sequence of multi-word insertions and deletions (Hunt and McIlroy, 1976). We considered adjacent insertiondeletion pairs to be (potentially phrasal) substitutions, and collected them into a file, omitting any unpaired insertions or deletions. We then sampled 650 cases where the two sides of the substitution were deemed to be related, and divided these cases into categories based on how the lattice desegmentation differs from the one-best desegmentation. We consider a phrase to be correct only if it can be found in the reference. 
Table 4 breaks down per-phrase accuracy according to four manually-assigned categories: (1) clitical – the two systems agree on a stem, but at least one clitic, often a prefix denoting a preposition or determiner, was dropped, added or replaced; (2) lexical – a word was changed to a morphologically unrelated word with a similar meaning; (3) inflectional – the words have the same stem, but different inflection due to a change in gender, number or verb tense; (4) part-of-speech – the two systems agree on the lemma, but have selected different parts of speech. For each case covering a single phrasal difference, we compare the phrases from each system to the reference. We report the number of instances where each system matched the reference, as well as cases where they were both incorrect. The majority of differences correspond to clitics, whose correction appears to be a major source of the improvements obtained by lattice desegmentation. This category is challenging for the decoder because English prepositions tend to correspond to multiple possible forms when translated into Arabic. It also includes the frequent cases involving the nominal determiner prefix Al “the” (left unsegmented by the PATB scheme), and the Lattice Correct 1-best Correct Both Incorrect Clitical 157 71 79 Lexical 61 39 80 Inflectional 37 32 47 Part-of-speech 19 17 11 Table 4: Error analysis for English-to-Arabic translation based on 650 sampled instances. sentence-initial conjunction w+ “and”. The second most common category is lexical, where the unsegmented LM has drastically altered the choice of translation. The remaining categories show no major advantage for either system. 6 Conclusion We have explored deeper integration of morphological desegmentation into the statistical machine translation pipeline. We have presented a novel, finite-state-inspired approach to lattice desegmentation, which allows the system to account for a desegmented view of many possible translations, without any modification to the decoder or any restrictions on phrase extraction. When applied to English-to-Arabic translation, lattice desegmentation results in a 1.0 BLEU point improvement over one-best desegmentation, and a 1.7 BLEU point improvement over unsegmented translation. We have also applied our approach to English-toFinnish translation, and although segmentation in general does not currently help, we are able to show significant improvements over a 1-best desegmentation baseline. In the future, we plan to explore introducing multiple segmentation options into the lattice, and the application of our method to a full morphological analysis (as opposed to segmentation) of the target language. Eventually, we would like to replace the functionality of factored translation models (Koehn and Hoang, 2007) with lattice transformation and augmentation. Acknowledgments Thanks to Ann Clifton for generously providing the data and segmentation for our English-toFinnish experiments, and to Marine Carpuat and Roland Kuhn for their helpful comments on an earlier draft. This research was supported by the Natural Sciences and Engineering Research Council of Canada. 108 References Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of the Ninth International Conference on Implementation and Application of Automata, (CIAA 2007), volume 4783 of Lecture Notes in Computer Science, pages 11–23. Springer. http://www.openfst.org. 
Ibrahim Badr, Rabih Zbib, and James Glass. 2008. Segmentation for English-to-Arabic statistical machine translation. In Proceedings of ACL, pages 153–156. Ondˇrej Bojar. 2007. English-to-Czech factored machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 232–239, Prague, Czech Republic, June. Colin Cherry and George Foster. 2012. Batch tuning strategies for statistical machine translation. In Proceedings of HLT-NAACL, Montreal, Canada, June. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In Proceedings of EMNLP, pages 224–233. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of ACL, pages 176–181. Ann Clifton and Anoop Sarkar. 2011. Combining morpheme-based machine translation with postprocessing morpheme prediction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 32–42, Portland, Oregon, USA, June. Mathias Creutz and Krista Lagus. 2005. Inducing the morphological lexicon of a natural language from unannotated text. In In Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR05, pages 106–113. Ahmed El Kholy and Nizar Habash. 2012a. Orthographic and morphological processing for English— Arabic statistical machine translation. Machine Translation, 26(1-2):25–45, March. Ahmed El Kholy and Nizar Habash. 2012b. Translate, predict or generate: Modeling rich morphology in statistical machine translation. Proceeding of the Meeting of the European Association for Machine Translation. Alexander Fraser, Marion Weller, Aoife Cahill, and Fabienne Cap. 2012. Modeling inflection and wordformation in SMT. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 664–674, Avignon, France, April. Association for Computational Linguistics. Nizar Habash, Owen Rambow, and Ryan Roth. 2009. Mada+tokan: A toolkit for Arabic tokenization, diacritization, morphological disambiguation, POS tagging, stemming and lemmatization. In Khalid Choukri and Bente Maegaard, editors, Proceedings of the Second International Conference on Arabic Language Resources and Tools, Cairo, Egypt, April. The MEDAR Consortium. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 144–151, Prague, Czech Republic, June. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-08: HLT, pages 586–594, Columbus, Ohio, June. James W. Hunt and M. Douglas McIlroy. 1976. An algorithm for differential file comparison. Technical report, Bell Laboratories, June. Minwoo Jeong, Kristina Toutanova, Hisami Suzuki, and Chris Quirk. 2010. A discriminative lexicon model for complex morphology. In The Ninth Conference of the Association for Machine Translation in the Americas. Philipp Koehn and Hieu Hoang. 2007. Factored translation models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 868– 876, Prague, Czech Republic, June. Association for Computational Linguistics. 
Philipp Koehn, Franz Joesef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL, pages 127–133. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic, June. Association for Computational Linguistics. Minh-Thang Luong, Preslav Nakov, and Min-Yen Kan. 2010. A hybrid morpheme-word representation for machine translation of morphologically rich languages. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 148–157, Cambridge, MA, October. 109 Einat Minkov, Kristina Toutanova, and Hisami Suzuki. 2007. Generating complex morphology for machine translation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 128–135, Prague, Czech Republic, June. Franz Josef Och, Hermann Ney, Franz Josef, and Och Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29. Franz Joseph Och. 2003. Minimum error rate training for statistical machine translation. In Proceedings of ACL, pages 160–167. Kemal Oflazer and Ilknur Durgar El-Kahlout. 2007. Exploring different representational units in English-to-Turkish statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 25–32, Prague, Czech Republic, June. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Brian Roark, Richard Sproat, and Izhak Shafran. 2011. Lexicographic semirings for exact automata encoding of sequence models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1–5, Portland, Oregon, USA, June. Mohammad Salameh, Colin Cherry, and Grzegorz Kondrak. 2013. Reversing morphological tokenization in English-to-Arabic SMT. In Proceedings of the 2013 NAACL HLT Student Research Workshop, pages 47–53, Atlanta, Georgia, June. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In Intl. Conf. Spoken Language Processing, pages 901–904. Michael Subotin. 2011. An exponential translation model for target language morphology. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 230–238, Portland, Oregon, USA, June. Kristina Toutanova, Hisami Suzuki, and Achim Ruopp. 2008. Applying morphology generation models to machine translation. In Proceedings of ACL-08: HLT, pages 514–522, Columbus, Ohio, June. Nicola Ueffing, Franz J. Och, and Hermann Ney. 2002. Generation of word graphs in statistical machine translation. In Proceedings of EMNLP, pages 156– 163, Philadelphia, PA, July. 110
2014
10
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1062–1072, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Spectral Unsupervised Parsing with Additive Tree Metrics Ankur P. Parikh School of Computer Science Carnegie Mellon University [email protected] Shay B. Cohen School of Informatics University of Edinburgh [email protected] Eric P. Xing School of Computer Science Carnegie Mellon University [email protected] Abstract We propose a spectral approach for unsupervised constituent parsing that comes with theoretical guarantees on latent structure recovery. Our approach is grammarless – we directly learn the bracketing structure of a given sentence without using a grammar model. The main algorithm is based on lifting the concept of additive tree metrics for structure learning of latent trees in the phylogenetic and machine learning communities to the case where the tree structure varies across examples. Although finding the “minimal” latent tree is NP-hard in general, for the case of projective trees we find that it can be found using bilexical parsing algorithms. Empirically, our algorithm performs favorably compared to the constituent context model of Klein and Manning (2002) without the need for careful initialization. 1 Introduction Solutions to the problem of grammar induction have been long sought after since the early days of computational linguistics and are interesting both from cognitive and engineering perspectives. Cognitively, it is more plausible to assume that children obtain only terminal strings of parse trees and not the actual parse trees. This means the unsupervised setting is a better model for studying language acquisition. From the engineering perspective, training data for unsupervised parsing exists in abundance (i.e. sentences and part-of-speech tags), and is much cheaper than the syntactically annotated data required for supervised training. Most existing solutions treat the problem of unsupervised parsing by assuming a generative process over parse trees e.g. probabilistic context free grammars (Jelinek et al., 1992), and the constituent context model (Klein and Manning, 2002). Learning then reduces to finding a set of parameters that are estimated by identifying a local maximum of an objective function such as the likelihood (Klein and Manning, 2002) or a variant of it (Smith and Eisner, 2005; Cohen and Smith, 2009; Headden et al., 2009; Spitkovsky et al., 2010b; Gillenwater et al., 2010; Golland et al., 2012). Unfortunately, finding the global maximum for these objective functions is usually intractable (Cohen and Smith, 2012) which often leads to severe local optima problems (but see Gormley and Eisner, 2013). Thus, strong experimental results are often achieved by initialization techniques (Klein and Manning, 2002; Gimpel and Smith, 2012), incremental dataset use (Spitkovsky et al., 2010a) and other specialized techniques to avoid local optima such as count transforms (Spitkovsky et al., 2013). These approaches, while empirically promising, generally lack theoretical justification. On the other hand, recently proposed spectral methods approach the problem via restriction of the PCFG model (Hsu et al., 2012) or matrix completion (Bailly et al., 2013). These novel perspectives offer strong theoretical guarantees but are not designed to achieve competitive empirical results. 
In this paper, we suggest a different approach, to provide a first step to bridging this theoryexperiment gap. More specifically, we approach unsupervised constituent parsing from the perspective of structure learning as opposed to parameter learning. We associate each sentence with an undirected latent tree graphical model, which is a tree consisting of both observed variables (corresponding to the words in the sentence) and an additional set of latent variables that are unobserved in the data. This undirected latent tree is then directed via a direction mapping to give the final constituent parse. In our framework, parsing reduces to finding the best latent structure for a given sentence. However, due to the presence of latent variables, structure learning of latent trees is substantially more complicated than in observed models. As before, one solution would be local search heuristics. Intuitively, however, latent tree models encode low rank dependencies among the observed variables permitting the development of “spec1062 tral” methods that can lead to provably correct solutions. In particular we leverage the concept of additive tree metrics (Buneman, 1971; Buneman, 1974) in phylogenetics and machine learning that can create a special distance metric among the observed variables as a function of the underlying spectral dependencies (Choi et al., 2011; Song et al., 2011; Anandkumar et al., 2011; Ishteva et al., 2012). Additive tree metrics can be leveraged by “meta-algorithms” such as neighbor-joining (Saitou and Nei, 1987) and recursive grouping (Choi et al., 2011) to provide consistent learning algorithms for latent trees. Moreover, we show that it is desirable to learn the “minimal” latent tree based on the tree metric (“minimum evolution” in phylogenetics). While this criterion is in general NP-hard (Desper and Gascuel, 2005), for projective trees we find that a bilexical parsing algorithm can be used to find an exact solution efficiently (Eisner and Satta, 1999). Unlike in phylogenetics and graphical models, where a single latent tree is constructed for all the data, in our case, each part of speech sequence is associated with its own parse tree. This leads to a severe data sparsity problem even for moderately long sentences. To handle this issue, we present a strategy that is inspired by ideas from kernel smoothing in the statistics community (Zhou et al., 2010; Kolar et al., 2010b; Kolar et al., 2010a). This allows principled sharing of samples from different but similar underlying distributions. We provide theoretical guarantees on the recovery of the correct underlying latent tree and characterize the associated sample complexity under our technique. Empirically we evaluate our method on data in English, German and Chinese. Our algorithm performs favorably to Klein and Manning’s (2002) constituent-context model (CCM), without the need for careful initialization. In addition, we also analyze CCM’s sensitivity to initialization, and compare our results to Seginer’s algorithm (Seginer, 2007). 2 Learning Setting and Model In this section, we detail the learning setting and a conditional tree model we learn the structure for. 2.1 Learning Setting Let w = (w1, ..., wℓ) be a vector of words corresponding to a sentence of length ℓ. Each wi is represented by a vector in Rp for p ∈N. 
The vector is an embedding of the word in some space, choVBD DT NN VBD DT NN Figure 2: Candidate constituent parses for x = (VBD, DT, NN) (left-correct, right -incorrect) sen from a fixed dictionary that maps word types to Rp. In addition, let x = (x1, ..., xℓ) be the associated vector of part-of-speech (POS) tags (i.e. xi is the POS tag of wi). In our learning algorithm, we assume that examples of the form (w(i), x(i)) for i ∈[N] = {1, . . . , N} are given, and the goal is to predict a bracketing parse tree for each of these examples. The word embeddings are used during the learning process, but the final decoder that the learning algorithm outputs maps a POS tag sequence x to a parse tree. While ideally we would want to use the word information in decoding as well, much of the syntax of a sentence is determined by the POS tags, and relatively high level of accuracy can be achieved by learning, for example, a supervised parser from POS tag sequences. Just like our decoder, our model assumes that the bracketing of a given sentence is a function of its POS tags. The POS tags are generated from some distribution, followed by a deterministic generation of the bracketing parse tree. Then, latent states are generated for each bracket, and finally, the latent states at the yield of the bracketing parse tree generate the words of the sentence (in the form of embeddings). The latent states are represented by vectors z ∈Rm where m < p. 2.2 Intuition For intuition, consider the simple tag sequence x = (VBD, DT, NN). Two candidate constituent parse structures are shown in Figure 2 and the correct one is boxed in green (the other in red). Recall that our training data contains word phrases that have the tag sequence x e.g. w(1) = (hit, the, ball), w(2) = (ate, an, apple). Intuitively, the words in the above phrases exhibit dependencies that can reveal the parse structure. The determiner (w2) and the direct object (w3) are correlated in that the choice of determiner depends on the plurality of w3. However, the choice of verb (w1) is mostly independent of the determiner. We could thus conclude that w2 and w3 should be closer in the parse tree than w1 1063 The bear ate the fish 𝑤1 , 𝑤2 , 𝑤3 , 𝑤4 , 𝑤5, 𝑧1, 𝑧2, 𝑧3 𝒙= (𝐷𝑇, 𝑁𝑁, 𝑉𝐵𝐷, 𝐷𝑇, 𝑁𝑁) 𝑢(𝒙) ((DT NN) (VBD (DT NN))) w1 w2 w3 z3 z1 w4 w5 z2 w1 w2 w3 z3 z1 w4 w5 z2 Figure 1: Example for the tag sequence (DT, NN, VBD, DT, NN) showing the overview of our approach. We first learn a undirected latent tree for the sequence (left). We then apply a direction mapping hdir to direct the latent tree (center). This can then easily be converted into a bracketing (right). and w2, giving us the correct structure. Informally, the latent state z corresponding to the (w2, w3) bracket would store information about the plurality of z, the key to the dependence between w2 and w3. It would then be reasonable to assume that w2 and w3 are independent given z. 2.3 A Conditional Latent Tree Model Following this intuition, we propose to model the distribution over the latent bracketing states and words for each tag sequence x as a latent tree graphical model, which encodes conditional independences among the words given the latent states. Let V := {w1, ..., wℓ, z1, ..., zH}, with wi representing the word embeddings, and zi representing the latent states of the bracketings. 
Then, according to our base model it holds that: p(w, z|x) = H Y i=1 p(zi|πx(zi), θ(x)) × ℓ(x) Y i=1 p(wi|πx(wi), θ(x)) (1) where πx(·) returns the parent node index of the argument in the latent tree corresponding to tag sequence x.1 If z is the root, then πx(z) = ∅. All the wi are assumed to be leaves while all the zi are internal (i.e. non-leaf) nodes. The parameters θ(x) control the conditional probability tables. We do not commit to a certain parametric family, but see more about the assumptions we make about θ in §3.2. The parameter space is denoted Θ. The model assumes a factorization according to a latent-variable tree. The latent variables can incorporate various linguistic properties, such as head information, valence of dependency being generated, and so on. This information is expected to be learned automatically from data. Our generative model deterministically maps a POS sequence to a bracketing via an undirected 1At this point, π refers to an arbitrary direction of the undirected tree u(x). latent-variable tree. The orientation of the tree is determined by a direction mapping hdir(u), which is fixed during learning and decoding. This means our decoder first identifies (given a POS sequence) an undirected tree, and then orients it by applying hdir on the resulting tree (see below). Define U to be the set of undirected latent trees where all internal nodes have degree exactly 3 (i.e. they correspond to binary bracketing), and in addition hdir(u) for any u ∈U is projective (explained in the hdir section). In addition, let T be the set of binary bracketings. The complete generative model that we follow is then: • Generate a tag sequence x = (x1, . . . , xℓ) • Decide on u(x) ∈U, the undirected latent tree that x maps to. • Set t ∈T by computing t = hdir(u). • Set θ ∈Θ by computing θ = θ(x). • Generate a tuple v = (w1, . . . , wℓ, z1, ..., zH) where wi ∈Rp, zj ∈Rm according to Eq. 1. See Figure 1 (left) for an example. The Direction Mapping hdir. Generating a bracketing via an undirected tree enables us to build on existing methods for structure learning of latent-tree graphical models (Choi et al., 2011; Anandkumar et al., 2011). Our learning algorithm focuses on recovering the undirected tree based for the generative model that was described above. This undirected tree is converted into a directed tree by applying hdir. The mapping hdir works in three steps: • It first chooses a top bracket ([1, R −1], [R, ℓ]) where R is the mid-point of the bracket and ℓis the length of the sentence. • It marks the edge ei,j that splits the tree according to the top bracket as the “root edge” (marked in red in Figure 1(center)) • It then creates t from u by directing the tree outward from ei,j as shown in Figure 1(center) 1064 The resulting t is a binary bracketing parse tree. As implied by the above definition of hdir, selecting which edge is the root can be interpreted as determining the top bracket of the constituent parse. For example, in Figure 1, the top bracket is ([1, 2], [3, 5]) = ([DT, NN], [VBD, DT, NN]). Note that the “root” edge ez1,z2 partitions the leaves into precisely this bracketing. As indicated in the above section, we restrict the set of undirected trees to be those such that after applying hdir the resulting t is projective i.e. there are no crossing brackets. In §4.1, we discuss an effective heuristic to find the top bracket without supervision. 
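These three steps can be made concrete with a short sketch. The labelling convention below (word positions 1..ℓ for leaves, larger integers for latent nodes) and the assumption that the split point R of the top bracket is given are introduced only for illustration; this is not the implementation used in the experiments.

def h_dir(adj, num_leaves, split):
    # adj: dict node -> set of neighbours in the undirected latent tree;
    #      leaves are word positions 1..num_leaves, internal nodes have degree 3.
    # split: R such that the top bracket is ([1, R-1], [R, num_leaves]).
    # Returns a nested tuple of word positions, i.e. a binary bracketing.
    left = set(range(1, split))

    def leaves_under(node, parent):
        # Leaves reachable from `node` once the edge (parent, node) is cut.
        if node <= num_leaves:
            return {node}
        out = set()
        for nb in adj[node]:
            if nb != parent:
                out |= leaves_under(nb, node)
        return out

    # Step 2: the "root edge" is the edge whose removal yields the top bracket.
    root_edge = None
    for u in adj:
        for v in adj[u]:
            if leaves_under(u, v) == left:
                root_edge = (u, v)
    assert root_edge is not None, "no edge induces the requested top bracket"

    def bracket(node, parent):
        # Step 3: direct the tree outward from the root edge.
        if node <= num_leaves:
            return node
        kids = sorted((nb for nb in adj[node] if nb != parent),
                      key=lambda k: min(leaves_under(k, node)))
        return (bracket(kids[0], node), bracket(kids[1], node))

    u, v = root_edge
    return (bracket(u, v), bracket(v, u))

# Hypothetical 3-word example: one latent node 4 attached to leaves 1, 2, 3.
# h_dir({1: {4}, 2: {4}, 3: {4}, 4: {1, 2, 3}}, num_leaves=3, split=2) -> (1, (2, 3))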
3 Spectral Learning Algorithm based on Additive Tree Metrics Our goal is to recover t ∈T for tag sequence x using the data D = [(w(i), x(i))]N i=1. To get an intuition about the algorithm, consider a partition of the set of examples D into D(x) = {(w(i), x(i)) ∈ D|x(i) = x}, i.e. each section in the partition has an identical sequence of part of speech tags. Assume for this section |D(x)| is large (we address the data sparsity issue in §3.4). We can then proceed by learning how to map a POS sequence x to a tree t ∈T (through u ∈U) by focusing only on examples in D(x). Directly attempting to maximize the likelihood unfortunately results in an intractable optimization problem and greedy heuristics are often employed (Harmeling and Williams, 2011). Instead we propose a method that is provably consistent and returns a tree that can be mapped to a bracketing using hdir. If all the variables were observed, then the Chow-Liu algorithm (Chow and Liu, 1968) could be used to find the most likely tree structure u ∈ U. The Chow-Liu algorithm essentially computes the distances among all pairs of variables (the negative of the mutual information) and then finds the minimum cost tree. However, the fact that the zi are latent variables makes this strategy substantially more complicated. In particular, it becomes challenging to compute the distances among pairs of latent variables. What is needed is a “special” distance function that allows us to reverse engineer the distances among the latent variables given the distances among the observed variables. This is the key idea behind additive tree metrics that are the basis of our approach. In the following sections, we describe the key steps to our method. §3.1 and §3.2 largely describe existing background on additive tree metrics and latent tree structure learning, while §3.3 and §3.4 discuss novel aspects that are unique to our problem. 3.1 Additive Tree Metrics Let u(x) be the true undirected tree of sentence x and assume the nodes V to be indexed by [M] = {1, . . . , M} such that M = |V| = H + ℓ. Furthermore, let v ∈V refer to a node in the undirected tree (either observed or latent). We assume the existence of a distance function that allows us to compute distances between pairs of nodes. For example, as we see in §3.2 we will define the distance d(i, j) to be a function of the covariance matrix E[viv⊤ j |u(x), θ(x)]. Thus if vi and vj are both observed variables, the distance can be directly computed from the data. Moreover, the metrics we construct are such that they are tree additive, defined below: Definition 1 A function du(x) : [M]×[M] →R is an additive tree metric (Erd˜os et al., 1999) for the undirected tree u(x) if it is a distance metric,2 and furthermore, ∀i, j ∈[M] the following relation holds: du(x)(i, j) = X (a,b)∈pathu(x)(i,j) du(x)(a, b) (2) where pathu(x)(i, j) is the set of all the edges in the (undirected) path from i to j in the tree u(x). As we describe below, given the tree structure, the additive tree metric property allows us to compute “backwards” the distances among the latent variables as a function of the distances among the observed variables. Define D to be the M × M distance matrix among the M variables, i.e. Dij = du(x)(i, j). Let DWW , DZW (equal to D⊤ WZ), and DZZ indicate the word-word, latent-word and latent-latent sub-blocks of D respectively. In addition, since u(x) is assumed to be known from context, we denote du(x)(i, j) just by d(i, j). 
Given the fact that the distance between a pair of nodes is a function of the random variables they represent (according to the true model), only DWW can be empirically estimated from data. However, if the underlying tree structure is known, then Definition 1 can be leveraged to compute DZZ and DZW as we show below. 2This means that it satisfies d(i, j) = 0 if and only if i = j, the triangle inequality and is also symmetric. 1065 vj vi ei,j (a) vi ei,j vj (b) Figure 3: Two types of edges in general undirected latent trees. (a) leaf edge, (b) internal edge We first show how to compute d(i, j) for all i, j such that i and j are adjacent to each other in u(x), based only on observed nodes. It then follows that the other elements of the distance matrix can be computed based on Definition 1. To show how to compute distances between adjacent nodes, consider the two cases: (1) (i, j) is a leaf edge; (2) (i, j) is an internal edge. Case 1 (leaf edge, figure 3(a)) Assume without loss of generality that j is the leaf and i is an internal latent node. Then i must have exactly two other neighbors a ∈[M] and b ∈[M]. Let A denote the set of nodes that are closer to a than i and similarly let B denote the set of nodes that are closer to b than i. Let A∗and B∗denote all the leaves (word nodes) in A and B respectively. Then using path additivity (Definition 1), it can be shown that for any a∗∈A∗, b∗∈B∗it holds that: d(i, j) = 1 2 (d(j, a∗) + d(j, b∗) −d(a∗, b∗)) (3) Note that the right-hand side only depends on distances between observed random variables. Case 2 (internal edge, figure 3(b)) Both i and j are internal nodes. In this case, i has exactly two other neighbors a ∈[M] and b ∈[M], and similarly, j has exactly other two neighbors g ∈ [M] and h ∈[M]. Let A denote the set of nodes closer to a than i, and analogously for B, G, and H. Let A∗, B∗, G∗, and H∗refer to the leaves in A, B, G, and H respectively. Then for any a∗∈ A∗, b∗∈B∗, g∗∈G∗, and h∗∈H∗it can be shown that: d(i, j) = 1 4  d(a∗, g∗) + d(a∗, h∗) + d(b∗, g∗) +d(b∗, h∗) −2d(a∗, b∗) −2d(g∗, h∗)  (4) Empirically, one can obtain a more robust empirical estimate bd(i, j) by averaging over all valid choices of a∗, b∗in Eq. 3 and all valid choices of a∗, b∗, g∗, h∗in Eq. 4 (Desper and Gascuel, 2005). 3.2 Constructing a Spectral Additive Metric In constructing our distance metric, we begin with the following assumption on the distribution in Eq. 1 (analogous to the assumptions made in Anandkumar et al., 2011). Assumption 1 (Linear, Rank m, Means) E[zi|πx(zi), x] = A(zi|zπx(zi),x)πx(zi) ∀i ∈[H] where A(zi|πx(zi),x) ∈Rm×m has rank m. E[wi|πx(wi), x] = C(wi|πx(wi),x)πx(wi) ∀i ∈[ℓ(x)] where C(wi|πx(wi),x) ∈Rp×m has rank m. Also assume that E[ziz⊤ i |x] has rank m ∀i ∈ [H]. Note that the matrices A and C are a direct function of θ(x), but we do not specify a model family for θ(x). The only restriction is in the form of the above assumption. If wi and zi were discrete, represented as binary vectors, the above assumption would correspond to requiring all conditional probability tables in the latent tree to have rank m. Assumption 1 allows for the wi to be high dimensional features, as long as the expectation requirement above is satisfied. Similar assumptions are made with spectral parameter learning methods e.g. Hsu et al. (2009), Bailly et al. (2009), Parikh et al. (2011), and Cohen et al. (2012). 
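Once the covariance matrices are available, Eq. 5 is direct to compute. The numpy sketch below works with log Λm rather than Λm itself, which is mathematically equivalent but avoids underflow when singular values are small; cov is a placeholder for whichever estimator of Σx(i, j) is used (here, the kernel-smoothed estimates of §3.4).

import numpy as np

def log_lambda_m(mat, m):
    # log of the product of the top-m singular values of `mat` (log Lambda_m).
    return float(np.sum(np.log(np.linalg.svd(mat, compute_uv=False)[:m])))

def d_spectral(cov, i, j, m):
    # Spectral additive distance of Eq. 5.  cov(i, j) returns the (estimated)
    # uncentered cross-covariance matrix E[v_i v_j^T | x].
    return (-log_lambda_m(cov(i, j), m)
            + 0.5 * log_lambda_m(cov(i, i), m)
            + 0.5 * log_lambda_m(cov(j, j), m))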
Furthermore, Assumption 1 makes it explicit that regardless of the size of p, the relationships among the variables in the latent tree are restricted to be of rank m, and are thus low rank since p > m. To leverage this low rank structure, we propose using the following additive metric, a normalized variant of that in Anandkumar et al. (2011): dspectral(i, j) = −log Λm(Σx(i, j)) +1 2 log Λm(Σx(i, i)) + 1 2 log Λm(Σx(j, j)) (5) where Λm(A) denotes the product of the top m singular values of A and Σx(i, j) := E[viv⊤ j |x], i.e. the uncentered cross-covariance matrix. We can then show that this metric is additive: Lemma 1 If Assumption 1 holds then, dspectral is an additive tree metric (Definition 1). A proof is in the supplementary for completeness. From here, we use d to denote dspectral, since that is the metric we use for our learning algorithm. 1066 3.3 Recovering the Minimal Projective Latent Tree It has been shown (Rzhetsky and Nei, 1993) that for any additive tree metric, u(x) can be recovered by solving arg minu∈U c(u) for c(u): c(u) = X (i,j)∈Eu d(i, j). (6) where Eu is the set of pairs of nodes which are adjacent to each other in u and d(i, j) is computed using Eq. 3 and Eq. 4. Note that the metric d we use in defining c(u) is based on the expectations from the true distribution. In practice, the true distribution is unknown, and therefore we use an approximation for the distance metric ˆd. As we discussed in §3.1 all elements of the distance matrix are functions of observable quantities if the underlying tree u is known. However, only the word-word sub-block DWW can be directly estimated from the data without knowledge of the tree structure. This subtlety makes solving the minimization problem in Eq. 6 NP-hard (Desper and Gascuel, 2005) if u is allowed to be an arbitrary undirected tree. However, if we restrict u to be in U, as we do in the above, then maximizing ˆc(u) over U can be solved using the bilexical parsing algorithm from Eisner and Satta (1999). This is because the computation of the other sub-blocks of the distance matrix only depend on the partitions of the nodes shown in Figure 3 into A, B, G, and H, and not on the entire tree structure. Therefore, the procedure to find a bracketing for a given POS tag x is to first estimate the distance matrix sub-block b DWW from raw text data (see §3.4), and then solve the optimization problem arg minu∈U ˆc(u) using a variant of the EisnerSatta algorithm where ˆc(u) is identical to c(u) in Eq. 6, with d replaced with ˆd. Summary. We first defined a generative model that describes how a sentence, its sequence of POS tags, and its bracketing is generated (§2.3). First an undirected u ∈U is generated (only as a function of the POS tags), and then u is mapped to a bracketing using a direction mapping hdir. We then showed that we can define a distance metric between nodes in the undirected tree, such that minimizing it leads to a recovery of u. This distance metric can be computed based only on the text, without needing to identify the latent information (§3.2). If the true distance metric is known, Algorithm 1 The learning algorithm for finding the latent structure from a set of examples (w(i), x(i)), i ∈[N]. Inputs: Set of examples (w(i), x(i)) for i ∈[N], a kernel Kγ(j, k, j′, k′|x, x′), an integer m Data structures: For each i ∈ [N], j, k ∈ ℓ(x(i)) there is a (uncentered) covariance matrix bΣx(i)(j, k) ∈Rp×p, and a distance ˆdspectral(j, k). 
Algorithm: (Covariance estimation) ∀i ∈[N], j, k ∈ℓ(x(i)) • Let Cj′,k′|i′ = w(i′) j′ (w(i′) k′ )⊤, kj,k,j′,k′,i,i′ = Kγ(j, k, j′, k′|x(i), x(i′)) and ℓi′ = ℓ(x(i′)), and estimate each p × p covariance matrix as: bΣx(j, k) = PN i′=1 Pℓi′ j′=1 Pℓi′ k′=1 kj,k,j′,k′,i,i′Cj′,k′|i′ PN i′=1 Pℓi′ j′=1 Pℓi′ k′=1 kj,k,j′,k′,i,i′ • Compute ˆdspectral(j, k) ∀j, k ∈ℓ(x(i)) using Eq. 5. (Uncover structure) ∀i ∈[N] • Find ˆu(i) = arg minu∈U ˆc(u), and for the ith example, return the structure hdir(ˆu(i)). with respect to the true distribution that generates the words in a sentence, then u can be fully recovered by optimizing the cost function c(u). However, in practice the distance metric must be estimated from data, as discussed below. 3.4 Estimation of d from Sparse Data We now address the data sparsity problem, in particular that D(x) can be very small, and therefore estimating d for each POS sequence separately can be problematic.3 In order to estimate d from data, we need to estimate the covariance matrices Σx(i, j) (for i, j ∈ {1, . . . , ℓ(x)}) from Eq. 5. To give some motivation to our solution, consider estimating the covariance matrix Σx(1, 2) for the tag sequence x = (DT1, NN2, VBD3, DT4, NN5). D(x) may be insufficient for an accurate empirical es3This data sparsity problem is quite severe – for example, the Penn treebank (Marcus et al., 1993) has a total number of 43,498 sentences, with 42,246 unique POS tag sequences, averaging |D(x)| to be 1.04. 1067 timate. However, consider another sequence x′ = (RB1, DT2, NN3, VBD4, DT5, ADJ6, NN7). Although x and x′ are not identical, it is likely that Σx′(2, 3) is similar to Σx(1, 2) because the determiner and the noun appear in similar syntactic context. Σx′(5, 7) also may be somewhat similar, but Σx′(2, 7) should not be very similar to Σx(1, 2) because the noun and the determiner appear in a different syntactic context. The observation that the covariance matrices depend on local syntactic context is the main driving force behind our solution. The local syntactic context acts as an “anchor,” which enhances or replaces a word index in a sentence with local syntactic context. More formally, an anchor is a function G that maps a word index j and a sequence of POS tags x to a local context G(j, x). The anchor we use is G(j, x) = (j, xj). Then, the covariance matrices Σx are estimated using kernel smoothing (Hastie et al., 2009), where the smoother tests similarity between the different anchors G(j, x). The full learning algorithm is given in Figure 1. The first step in the algorithm is to estimate the covariance matrix block bΣx(i)(j, k) for each training example x(i) and each pair of preterminal positions (j, k) in x(i). Instead of computing this block by computing the empirical covariance matrix for positions (j, k) in the data D(x), the algorithm uses all of the pairs (j′, k′) from all of N training examples. It averages the empirical covariance matrices from these contexts using a kernel weight, which gives a similarity measure for the position (j, k) in x(i) and (j′, k′) in another example x(i′). γ is the kernel “bandwidth”, a user-specified parameter that controls how inclusive the kernel will be with respect to examples in D (see § 4.1 for a concrete example). Note that the learning algorithm is such that it ensures that bΣx(i)(j, k) = bΣx(i′)(j′, k′) if G(j, x(i)) = G(j′, x(i′)) and G(k, x(i)) = G(k′, x(i′)). 
Once the empirical estimates for the covariance matrices are obtained, a variant of the Eisner-Satta algorithm is used, as mentioned in §3.3. 3.5 Theoretical Guarantees Our main theoretical guarantee is that Algorithm 1 will recover the correct tree u ∈U with high probability, if the given top bracket is correct and if we obtain enough examples (w(i), x(i)) from the model in §2. We give the theorem statement below. The constants lurking in the O-notation and the full proof are in the supplementary. Denote σx(j, k)(r) as the rth singular value of Σx(j, k). Let σ∗(x) := minj,k∈ℓ(x) min σx(j, k)(m) . Theorem 1 Define ˆu as the estimated tree for tag sequence x and u(x) as the correct tree. Let △(x) := min u′∈U:u′̸=u(x)(c(u(x)) −c(u′))/(8|ℓ(x)|) Assume that N ≥O   m2 log  p2ℓ(x)2 δ  min(σ∗(x)2△(x)2, σ∗(x)2)νx(γ)2   Then with probability 1 −δ, ˆu = u(x). where νx(γ), defined in the supplementary, is a function of the underlying distribution over the tag sequences x and the kernel bandwidth γ. Thus, the sample complexity of our approach depends on the dimensionality of the latent and observed states (m and p), the underlying singular values of the cross-covariance matrices (σ∗(x)) and the difference in the cost of the true tree compared to the cost of the incorrect trees (△(x)). 4 Experiments We report results on three different languages: English, German, and Chinese. For English we use the Penn treebank (Marcus et al., 1993), with sections 2–21 for training and section 23 for final testing. For German and Chinese we use the Negra treebank and the Chinese treebank respectively and the first 80% of the sentences are used for training and the last 20% for testing. All punctuation from the data is removed.4 We primarily compare our method to the constituent-context model (CCM) of Klein and Manning (2002). We also compare our method to the algorithm of Seginer (2007). 4.1 Experimental Settings Top bracket heuristic Our algorithm requires the top bracket in order to direct the latent tree. In practice, we employ the following heuristic to find the bracket using the following three steps: • If there exists a comma/semicolon/colon at index i that has at least a verb before i and both a noun followed by a verb after i, then return ([0, i −1], [i, ℓ(x)]) as the top bracket. (Pick the rightmost comma/semicolon/colon if multiple satisfy the criterion). 4We make brief use of punctuation for our top bracket heuristic detailed below before removing it. 1068 Length CCM CCM-U CCM-OB CCM-UB ≤10 72.5 57.1 58.2 62.9 ≤15 54.1 36 24 23.7 ≤20 50 34.7 19.3 19.1 ≤25 47.2 30.7 16.8 16.6 ≤30 44.8 29.6 15.3 15.2 ≤40 26.3 13.5 13.9 13.8 Table 1: Comparison of different CCM variants on English (training). U stands for universal POS tagset, OB stands for conjoining original POS tags with Brown clusters and UB stands for conjoining universal POS tags with Brown clusters. The best setting is just the vanilla setting, CCM. • Otherwise find the first non-participle verb (say at index j) and return ([0, j −1], [j, ℓ(x)]). • If no verb exists, return ([0, 1], [1, ℓ(x)]). Word embeddings As mentioned earlier, each wi can be an arbitrary feature vector. For all languages we use Brown clustering (Brown et al., 1992) to construct a log(C) + C feature vector where the first log(C) elements indicate which mergable cluster the word belongs to, and the last C elements indicate the cluster identity. For English, more sophisticated word embeddings are easily obtainable, and we experiment with neural word embeddings Turian et al. 
(2010) of length 50. We also explored two types of CCA embeddings: OSCCA and TSCCA, given in Dhillon et al. (2012). The OSCCA embeddings behaved better, so we only report its results. Choice of kernel For our experiments, we use the kernel Kγ(j, k, j′, k′|x, x′) = max  0, 1 −κ(j, k, j′, k′|x, x′) γ  where γ denotes the user-specified bandwidth, and κ(j, k, j′, k′|x, x′) = |j −k| −|j′ −k′| |j −k| + |j′ −k′| if x(j) = x(j′) and x(k′) = x(k), and sign(j − k) = sign(j′ −k′) (and ∞otherwise). The kernel is non-zero if and only if the tags at position j and k in x are identical to the ones in position j′ and k′ in x′, and if the direction between j and k is identical to the one between j′ and k′. Note that the kernel is not binary, as opposed to the theoretical kernel in the supplementary material. Our experiments show that using a non-zero value different than 1 that is a function of the distance between j and k compared to the distance between j′ and k′ does better in practice. Choice of data For CCM, we found that if the full dataset (all sentence lengths) is used in training, then performance degrades when evaluating on sentences of length ≤10. We therefore restrict the data used with CCM to sentences of length ≤ℓ, where ℓis the maximal sentence length being evaluated. This does not happen with our algorithm, which manages to leverage lexical information whenever more data is available. We therefore use the full data for our method for all lengths. We also experimented with the original POS tags and the universal POS tags of Petrov et al. (2011). Here, we found out that our method does better with the universal part of speech tags. For CCM, we also experimented with the original parts of speech, universal tags (CCM-U), the cross-product of the original parts of speech with the Brown clusters (CCM-OB), and the crossproduct of the universal tags with the Brown clusters (CCM-UB). The results in Table 1 indicate that the vanilla setting is the best for CCM. Thus, for all results, we use universal tags for our method and the original POS tags for CCM. We believe that our approach substitutes the need for fine-grained POS tags with the lexical information. CCM, on the other hand, is fully unlexicalized. Parameter Selection Our method requires two parameters, the latent dimension m and the bandwidth γ. CCM also has two parameters, the number of extra constituent/distituent counts used for smoothing. For both methods we chose the best parameters for sentences of length ℓ≤10 on the English Penn Treebank (training) and used this set for all other experiments. This resulted in m = 7, γ = 0.4 for our method and 2, 8 for CCM’s extra constituent/distituent counts respectively. We also tried letting CCM choose different hyperparameters for different sentence lengths based on dev-set likelihood, but this gave worse results than holding them fixed. 4.2 Results Test I: Accuracy Table 2 summarizes our results. CCM is used with the initializer proposed in Klein and Manning (2002).5 NN, CC, and BC indicate the performance of our method for neural embeddings, CCA embeddings, and Brown clustering respectively, using the heuristic for hdir de5We used the implementation available at http://tinyurl.com/lhwk5n6. 
1069 ℓ English German Chinese NN-O NN CC-O CC BC-O BC CCM BC-O BC CCM BC-O BC CCM train ≤10 70.9 69.2 70.4 68.7 71.1 69.3 72.5 64.6 59.9 62.6 64.9 57.3 46.1 ≤20 55.1 53.5 53.2 51.6 53.0 51.5 50 52.7 48.7 47.9 51.4 46 22.4 ≤40 46.1 44.5 43.6 41.9 43.3 41.8 26.3 46.7 43.6 19.8 42.6 38.6 15 test ≤10 69.2 66.7 68.3 65.5 68.9 66.1 70.5 66.4 61.6 64.7 58.0 53.2 40.7 ≤15 60.3 58.3 58.6 56.4 58.6 56.5 53.8 57.5 53.5 49.6 54.3 49.4 35.9 ≤20 54.1 52.3 52.3 50.3 51.9 50.2 50.4 52.8 49.2 48.9 49.7 45.5 20.1 ≤25 50.8 49.0 48.6 46.6 48.3 46.6 47.4 50.0 46.8 45.6 46.7 42.7 17.8 ≤30 48.1 46.3 45.6 43.7 45.4 43.8 44.9 48.3 45.4 21.9 44.6 40.7 16.1 ≤40 45.5 43.8 43.0 41.1 42.7 41.1 26.1 46.9 44.1 20.1 42.2 38.6 14.3 Table 2: F1 bracketing measure for the test sets and train sets in three languages. NN, CC, and BC indicate the performance of our method for neural embeddings, CCA embeddings, and Brown clustering respectively, using the heuristic for hdir described in § 4.1. NN-O, CC-O, and BC-O indicate that the oracle (i.e. true top bracket) was used for hdir. 0 5 10 15 20 25 30 35 20-30 31-40 41-50 51-60 61-70 71-80 Frequency Bracketing F1 CCM Random Restarts (Length <= 10) Figure 4: Histogram showing performance of CCM across 100 random restarts for sentences of length ≤10. scribed in § 4.1. NN-O, CC-O, and BC-O indicate that the oracle (i.e. true top bracket) was used for hdir. For our method, test set results can be obtained by using Algorithm 1 (except the distances are computed using the training data). For English, while CCM behaves better for short sentences (ℓ≤10), our algorithm is more robust with longer sentences. This is especially noticeable for length ≤40, where CCM breaks down and our algorithm is more stable. We find that the neural embeddings modestly outperform the CCA and Brown cluster embeddings. The results for German are similar, except CCM breaks down earlier at sentences of ℓ≤30. For Chinese, our method substantially outperforms CCM for all lengths. Note that CCM performs very poorly, obtaining only around 20% accuracy even for sentences of ℓ≤20. We didn’t have neural embeddings for German and Chinese (which worked best for English) and thus only used Brown cluster embeddings. For English, the disparity between NN-O (oracle top bracket) and NN (heuristic top bracket) is rather low suggesting that our top bracket heuristic is rather effective. However, for German and Chinese note that the “BC-O” performs substantially better, suggesting that if we had a better top bracket heuristic our performance would increase. Test II: Sensitivity to initialization The EM algorithm with the CCM requires very careful initialization, which is described in Klein and Manning (2002). If, on the other hand, random initialization is used, the variance of the performance of the CCM varies greatly. Figure 4 shows a histogram of the performance level for sentences of length ≤10 for different random initializers. As one can see, for some restarts, CCM obtains accuracies lower than 30% due to local optima. Our method does not suffer from local optima and thus does not require careful initialization. Test III: Comparison to Seginer’s algorithm Our approach is not directly comparable to Seginer’s because he uses punctuation, while we use POS tags. Using Seginer’s parser we were able to get results on the training sets. On English: 75.2% (ℓ≤10), 64.2% (ℓ≤20), 56.7% (ℓ≤40). On German: 57.8% (ℓ≤10), 45.0% (ℓ≤20), and 39.9% (ℓ≤40). On Chinese: 56.6% (ℓ≤10), 45.1% (ℓ≤20), and 38.9% (ℓ≤40). 
Thus, while Seginer’s method performs better on English, our approach performs 2-3 points better on German, and both methods give similar performance on Chinese. 5 Conclusion We described a spectral approach for unsupervised constituent parsing that comes with theoretical guarantees on latent structure recovery. Empirically, our algorithm performs favorably to the CCM of Klein and Manning (2002) without the need for careful initialization. Acknowledgements: This work is supported by NSF IIS1218282, NSF IIS1111142, NIH R01GM093156, and the NSF Graduate Research Fellowship Program under Grant No. 0946825 (NSF Fellowship to APP). 1070 References A. Anandkumar, K. Chaudhuri, D. Hsu, S. M. Kakade, L. Song, and T. Zhang. 2011. Spectral methods for learning multivariate latent tree structure. arXiv preprint arXiv:1107.1283. R. Bailly, F. Denis, and L. Ralaivola. 2009. Grammatical inference as a principal component analysis problem. In Proceedings of ICML. R. Bailly, X. Carreras, F. M. Luque, and A. Quattoni. 2013. Unsupervised spectral learning of WCFG as low-rank matrix completion. In Proceedings of EMNLP. P. F. Brown, P.V. Desouza, R.L. Mercer, V.J.D. Pietra, and J.C. Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467–479. O. P. Buneman. 1971. The recovery of trees from measures of dissimilarity. Mathematics in the archaeological and historical sciences. P. Buneman. 1974. A note on the metric properties of trees. Journal of Combinatorial Theory, Series B, 17(1):48–50. M.J. Choi, V. YF Tan, A. Anandkumar, and A.S. Willsky. 2011. Learning latent tree graphical models. The Journal of Machine Learning Research, 12:1771–1812. C. K. Chow and C. N. Liu. 1968. Approximating Discrete Probability Distributions With Dependence Trees. IEEE Transactions on Information Theory, IT-14:462–467. S. B. Cohen and N. A. Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In Proceedings of HLT-NAACL. S. B. Cohen and N. A. Smith. 2012. Empirical risk minimization for probabilistic grammars: Sample complexity and hardness of learning. Computational Linguistics, 38(3):479–526. S. B. Cohen, K. Stratos, M. Collins, D. P. Foster, and L. Ungar. 2012. Spectral learning of latent-variable PCFGs. In Proceedings of ACL. R. Desper and O. Gascuel. 2005. The minimum evolution distance-based approach to phylogenetic inference. Mathematics of evolution and phylogeny, pages 1–32. P. S. Dhillon, J. Rodu, D. P. Foster, and L. H. Ungar. 2012. Two step cca: A new spectral method for estimating vector models of words. In Proceedings of ICML. J. Eisner and G. Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proceedings of ACL. P. Erd˜os, M. Steel, L. Sz´ekely, and T. Warnow. 1999. A few logs suffice to build (almost) all trees: Part ii. Theoretical Computer Science, 221(1):77–118. J. Gillenwater, K. Ganchev, J. Grac¸a, F. Pereira, and B. Taskar. 2010. Sparsity in dependency grammar induction. In Proceedings of ACL. K. Gimpel and N.A. Smith. 2012. Concavity and initialization for unsupervised dependency parsing. In Proceedings of NAACL. D. Golland, J. DeNero, and J. Uszkoreit. 2012. A feature-rich constituent context model for grammar induction. In Proceedings of ACL. M. Gormley and J. Eisner. 2013. Nonconvex global optimization for latent-variable models. In Proceedings of ACL. S. Harmeling and C. KI Williams. 2011. Greedy learning of binary latent trees. 
Pattern Analysis and Machine Intelligence, IEEE Transactions on, 33(6):1087–1097. T. Hastie, R. Tibshirani, and J. Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer Verlag. W. P. Headden, M. Johnson, and D. McClosky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Proceedings of NAACL-HLT. D. Hsu, S. Kakade, and T. Zhang. 2009. A spectral algorithm for learning hidden Markov models. In Proceedings of COLT. D. Hsu, S. M. Kakade, and P. Liang. 2012. Identifiability and unmixing of latent parse trees. arXiv preprint arXiv:1206.3137. M. Ishteva, H. Park, and L. Song. 2012. Unfolding latent tree structures using 4th order tensors. arXiv preprint arXiv:1210.1258. F. Jelinek, J. D. Lafferty, and R. L. Mercer. 1992. Basic methods of probabilistic context free grammars. Springer. D. Klein and C. D. Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of ACL. M. Kolar, A. P. Parikh, and E. P. Xing. 2010a. On sparse nonparametric conditional covariance selection. In Proceedings of ICML. M. Kolar, L. Song, A. Ahmed, and E. P. Xing. 2010b. Estimating time-varying networks. The Annals of Applied Statistics, 4(1):94–123. M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, 19:313–330. 1071 A.P. Parikh, L. Song, and E.P. Xing. 2011. A spectral algorithm for latent tree graphical models. In Proceedings of ICML. S. Petrov, D. Das, and R. McDonald. 2011. A universal part-of-speech tagset. ArXiv:1104.2086. A. Rzhetsky and M. Nei. 1993. Theoretical foundation of the minimum-evolution method of phylogenetic inference. Molecular Biology and Evolution, 10(5):1073–1095. N. Saitou and M. Nei. 1987. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Molecular biology and evolution, 4(4):406–425. Y. Seginer. 2007. Fast unsupervised incremental parsing. In Proceedings of ACL. N. A. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of ACL. L. Song, A.P. Parikh, and E.P. Xing. 2011. Kernel embeddings of latent tree graphical models. In Proceedings of NIPS. V. I. Spitkovsky, H. Alshawi, and D. Jurafsky. 2010a. From baby steps to leapfrog: how less is more in unsupervised dependency parsing. In Proceedings of NAACL. V. I. Spitkovsky, H. Alshawi, D. Jurafsky, and C. D. Manning. 2010b. Viterbi training improves unsupervised dependency parsing. In Proceedings of CoNLL. V. I. Spitkovsky, H. Alshawi, and D. Jurafsky. 2013. Breaking out of local optima with count transforms and model recombination: A study in grammar induction. In Proceedings of EMNLP. J. P. Turian, L.-A. Ratinov, and Y. Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of ACL. S. Zhou, J. Lafferty, and L. Wasserman. 2010. Time varying undirected graphs. Machine Learning, 80(2-3):295–319. 1072
2014
100
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1073–1083, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Weak semantic context helps phonetic learning in a model of infant language acquisition Stella Frank [email protected] ILCC, School of Informatics University of Edinburgh Edinburgh, EH8 9AB, UK Naomi H. Feldman [email protected] Department of Linguistics University of Maryland College Park, MD, 20742, USA Sharon Goldwater [email protected] ILCC, School of Informatics University of Edinburgh Edinburgh, EH8 9AB, UK Abstract Learning phonetic categories is one of the first steps to learning a language, yet is hard to do using only distributional phonetic information. Semantics could potentially be useful, since words with different meanings have distinct phonetics, but it is unclear how many word meanings are known to infants learning phonetic categories. We show that attending to a weaker source of semantics, in the form of a distribution over topics in the current context, can lead to improvements in phonetic category learning. In our model, an extension of a previous model of joint word-form and phonetic category inference, the probability of word-forms is topic-dependent, enabling the model to find significantly better phonetic vowel categories and word-forms than a model with no semantic knowledge. 1 Introduction Infants begin learning the phonetic categories of their native language in their first year (Kuhl et al., 1992; Polka and Werker, 1994; Werker and Tees, 1984). In theory, semantic information could offer a valuable cue for phoneme induction1 by helping infants distinguish between minimal pairs, as linguists do (Trubetzkoy, 1939). However, due to a widespread assumption that infants do not know the meanings of many words at the age when they are learning phonetic categories (see Swingley, 2009 for a review), most recent models of early phonetic category acquisition have explored the phonetic learning problem in the absence of semantic information (de Boer and Kuhl, 2003; Dillon et al., 2013; 1The models in this paper do not distinguish between phonetic and phonemic categories, since they do not capture phonological processes (and there are also none present in our synthetic data). We thus use the terms interchangeably. Feldman et al., 2013a; McMurray et al., 2009; Vallabha et al., 2007). Models without any semantic information are likely to underestimate infants’ ability to learn phonetic categories. Infants learn language in the wild, and quickly attune to the fact that words have (possibly unknown) meanings. The extent of infants’ semantic knowledge is not yet known, but existing evidence shows that six-month-olds can associate some words with their referents (Bergelson and Swingley, 2012; Tincoff and Jusczyk, 1999, 2012), leverage non-acoustic contexts such as objects or articulations to distinguish similar sounds (Teinonen et al., 2008; Yeung and Werker, 2009), and map meaning (in the form of objects or images) to new word-forms in some laboratory settings (Friedrich and Friederici, 2011; Gogate and Bahrick, 2001; Shukla et al., 2011). These findings indicate that young infants are sensitive to co-occurrences between linguistic stimuli and at least some aspects of the world. 
In this paper we explore the potential contribution of semantic information to phonetic learning by formalizing a model in which learners attend to the word-level context in which phones appear (as in the lexical-phonetic learning model of Feldman et al., 2013a) and also to the situations in which word-forms are used. The modeled situations consist of combinations of categories of salient activities or objects, similar to the activity contexts explored by Roy et al. (2012), e.g.,‘getting dressed’ or ‘eating breakfast’. We assume that child learners are able to infer a representation of the situational context from their non-linguistic environment. However, in our simulations we approximate the environmental information by running a topic model (Blei et al., 2003) over a corpus of childdirected speech to infer a topic distribution for each situation. These topic distributions are then used as input to our model to represent situational contexts. The situational information in our model is simi1073 lar to that assumed by theories of cross-situational word learning (Frank et al., 2009; Smith and Yu, 2008; Yu and Smith, 2007), but our model does not require learners to map individual words to their referents. Even in the absence of word-meaning mappings, situational information is potentially useful because similar-sounding words uttered in similar situations are more likely to be tokens of the same lexeme (containing the same phones) than similarsounding words uttered in different situations. In simulations of vowel learning, inspired by Vallabha et al. (2007) and Feldman et al. (2013a), we show a clear improvement over previous models in both phonetic and lexical (word-form) categorization when situational context is used as an additional source of information. This improvement is especially noticeable when the word-level context is providing less information, arguably the more realistic setting. These results demonstrate that relying on situational co-occurrence can improve phonetic learning, even if learners do not yet know the meanings of individual words. 2 Background and overview of models Infants attend to distributional characteristics of their input (Maye et al., 2002, 2008), leading to the hypothesis that phonetic categories could be acquired on the basis of bottom-up distributional learning alone (de Boer and Kuhl, 2003; Vallabha et al., 2007; McMurray et al., 2009). However, this would require sound categories to be well separated, which often is not the case—for example, see Figure 1, which shows the English vowel space that is the focus of this paper. Recent work has investigated whether infants could overcome such distributional ambiguity by incorporating top-down information, in particular, the fact that phones appear within words. At six months, infants begin to recognize word-forms such as their name and other frequently occurring words (Mandel et al., 1995; Jusczyk and Hohne, 1997), without necessarily linking a meaning to these forms. This “protolexicon” can help differentiate phonetic categories by adding word contexts in which certain sound categories appear (Swingley, 2009; Feldman et al., 2013b). To explore this idea further, Feldman et al. (2013a) implemented the Lexical-Distributional (LD) model, which jointly learns a set of phonetic vowel categories and a set of word-forms containing those categories. 
Simulations showed that the use of lexical context greatly improved phonetic learning.

[Figure 1: The English vowel space (generated from Hillenbrand et al. (1995), see Section 6.2), plotted using the first two formants (F1 vs. F2).]

Our own Topic-Lexical-Distributional (TLD) model extends the LD model to include an additional type of context: the situations in which words appear. To motivate this extension and clarify the differences between the models, we now provide a high-level overview of both models; details are given in Sections 3 and 4.

2.1 Overview of LD model

Both the LD and TLD models are computational-level models of phonetic (specifically, vowel) categorization where phones (vowels) are presented to the model in the context of words.2 The task is to infer a set of phonetic categories and a set of lexical items on the basis of the data observed for each word token xi. In the original LD model, the observations for token xi are its frame fi, which consists of a list of consonants and slots for vowels, and the list of vowel tokens wi. (The TLD model includes additional observations, described below.) A single vowel token, wij, is a two dimensional vector representing the first two formants (peaks in the frequency spectrum, ordered from lowest to highest). For example, a token of the word kitty would have the frame fi = k t , containing two consonant phones, /k/ and /t/, with two vowel phone slots in between, and two vowel formant vectors, wi0 = [464, 2294] and wi1 = [412, 2760].3 Given the data, the model must assign each vowel token to a vowel category, wij = c. Both the LD and the TLD models do this using intermediate lexemes, ℓ, which contain vowel category assignments, vℓj = c, as well as a frame fℓ. If a word token is assigned to a lexeme, xi = ℓ, the vowels within the word are assigned to that lexeme's vowel categories, wij = vℓj = c.4 The word and lexeme frames must match, fi = fℓ.

2For a related model that also tackles the word segmentation problem, see Elsner et al. (2013). In a model of phonological learning, Fourtassi and Dupoux (submitted) show that semantic context information similar to that used here remains useful despite segmentation errors.

Lexical information helps with phonetic categorization because it can disambiguate highly overlapping categories, such as the ae and eh categories in Figure 1. A purely distributional learner who observes a cluster of data points in the ae-eh region is likely to assume all these points belong to a single category because the distributions of the categories are so similar. However, a learner who attends to lexical context will notice a difference: contexts that only occur with ae will be observed in one part of the ae-eh region, while contexts that only occur with eh will be observed in a different (though partially overlapping) space. The learner then has evidence of two different categories occurring in different sets of lexemes. Simulations with the LD model show that using lexical information to constrain phonetic learning can greatly improve categorization accuracy (Feldman et al., 2013a), but it can also introduce errors. When two word tokens contain the same consonant frame but different vowels (i.e., minimal pairs), the model is more likely to categorize those two vowels together. Thus, the model has trouble distinguishing minimal pairs.
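To keep the notation above concrete before turning to how context can help with minimal pairs, the sketch below shows one possible encoding of word tokens and lexemes. It is an illustration only, not the authors' implementation: the class and field names are ours, and only the kitty example from the text (a k_t frame with formant vectors [464, 2294] and [412, 2760]) is taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class WordToken:
    """One observed word token x_i: a consonant frame plus vowel formant vectors w_ij."""
    frame: str                      # consonants with '_' marking vowel slots, e.g. "k_t_"
    vowels: List[List[float]]       # one [F1, F2] vector per vowel slot
    lexeme: Optional[int] = None    # latent assignment x_i = l, inferred during learning

@dataclass
class Lexeme:
    """A latent lexeme l: the same kind of frame, plus a vowel-category id per slot (v_lj)."""
    frame: str
    vowel_cats: List[int]

def frames_match(token: WordToken, lexeme: Lexeme) -> bool:
    # A token may only be assigned to a lexeme with an identical frame (f_i = f_l).
    return token.frame == lexeme.frame

# The 'kitty' example from the text: two consonants and two vowel formant vectors.
kitty = WordToken(frame="k_t_", vowels=[[464.0, 2294.0], [412.0, 2760.0]])
lex = Lexeme(frame="k_t_", vowel_cats=[3, 7])   # the category ids here are arbitrary

if frames_match(kitty, lex):
    # Assigning the token to this lexeme forces each w_ij into category v_lj.
    print(list(zip(kitty.vowels, lex.vowel_cats)))
```

Under this encoding, minimal pairs are exactly those tokens whose frames match the same lexeme frame while their formant vectors come from different gold vowels.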
Although young children also have trouble with minimal pairs (Stager and Werker, 1997; Thiessen, 2007), the LD model may overestimate the degree of the problem. We hypothesize that if a learner is able to associate words with the contexts of their use (as children likely are), this could provide a weak source of information for disambiguating minimal pairs even without knowing their exact meanings. That is, if the learner hears kV1t and kV2t in different situational contexts, they are likely to be different lexical items (and V1 and V2 different phones), despite the lexical similarity between them. 3In simulations we also experiment with frames in which consonants are not represented perfectly. 4The notation is overloaded: wij refers both to the vowel formants and the vowel category assignments, and xi refers to both the token identity and its assignment to a lexeme. 2.2 Overview of TLD model To demonstrate the benefit of situational information, we develop the Topic-Lexical-Distributional (TLD) model, which extends the LD model by assuming that words appear in situations analogous to documents in a topic model. Each situation h is associated with a mixture of topics θh, which is assumed to be observed. Thus, for the ith token in situation h, denoted xhi, the observed data will be its frame fhi, vowels whi, and topic vector θh. From an acquisition perspective, the observed topic distribution represents the child’s knowledge of the context of the interaction: she can distinguish bathtime from dinnertime, and is able to recognize that some topics appear in certain contexts (e.g. animals on walks, vegetables at dinnertime) and not in others (few vegetables appear at bathtime). We assume that the child would learn these topics from observing the world around her and the co-occurrences of entities and activities in the world. Within any given situation, there might be a mixture of different (actual or possible) topics that are salient to the child. We assume further that as the child learns the language, she will begin to associate specific words with each topic as well. Thus, in the TLD model, the words used in a situation are topic-dependent, implying meaning, but without pinpointing specific referents. Although the model observes the distribution of topics in each situation (corresponding to the child observing her non-linguistic environment), it must learn to associate each (phonetically and lexically ambiguous) word token with a particular topic from that distribution. The occurrence of similar-sounding words in different situations with mostly non-overlapping topics will provide evidence that those words belong to different topics and that they are therefore different lexemes. Conversely, potential minimal pairs that occur in situations with similar topic distributions are more likely to belong to the same topic and thus the same lexeme. Although we assume that children infer topic distributions from the non-linguistic environment, we will use transcripts from CHILDES to create the word/phone learning input for our model. These transcripts are not annotated with environmental context, but Roy et al. (2012) found that topics learned from similar transcript data using a topic model were strongly correlated with immediate activities and contexts. 
We therefore obtain the topic distributions used as input to the TLD model by 1075 training an LDA topic model (Blei et al., 2003) on a superset of the child-directed transcript data we use for lexical-phonetic learning, dividing the transcripts into small sections (the ‘documents’ in LDA) that serve as our distinct situations h. As noted above, the learned document-topic distributions θ are treated as observed variables in the TLD model to represent the situational context. The topic-word distributions learned by LDA are discarded, since these are based on the (correct and unambiguous) words in the transcript, whereas the TLD model is presented with phonetically ambiguous versions of these word tokens and must learn to disambiguate them and associate them with topics. 3 Lexical-Distributional Model In this section we describe more formally the generative process for the LD model (Feldman et al., 2013a), a joint Bayesian model over phonetic categories and a lexicon, before describing the TLD extension in the following section. The set of phonetic categories and the lexicon are both modeled using non-parametric Dirichlet Process priors, which return a potentially infinite number of categories or lexemes. A DP is parametrized as DP(α, H), where α is a real-valued hyperparameter and H is a base distribution. H may be continuous, as when it generates phonetic categories in formant space, or discrete, as when it generates lexemes as a list of phonetic categories. A draw from a DP, G ∼DP(α, H), returns a distribution over a set of draws from H, i.e., a discrete distribution over a set of categories or lexemes generated by H. In the mixture model setting, the category assignments are then generated from G, with the datapoints themselves generated by the corresponding components from H. If H is infinite, the support of the DP is likewise infinite. During inference, we marginalize over G. 3.1 Phonetic Categories: IGMM Following previous models of vowel learning (de Boer and Kuhl, 2003; Vallabha et al., 2007; McMurray et al., 2009; Dillon et al., 2013) we assume that vowel tokens are drawn from a Gaussian mixture model. The Infinite Gaussian Mixture Model (IGMM) (Rasmussen, 2000) includes a DP prior, as described above, in which the base distribution HC generates multivariate Gaussians drawn from a Normal Inverse-Wishart prior.5 Each observation, a formant vector wij, is drawn from the Gaussian corresponding to its category assignment cij: µc, Σc ∼HC = NIW(µ0, Σ0, ν0) (1) GC ∼DP(αc, HC) (2) cij ∼GC (3) wij|cij = c ∼N(µc, Σc) (4) The above model generates a category assignment cij for each vowel token wij. This is the baseline IGMM model, which clusters vowel tokens using bottom-up distributional information only; the LD model adds top-down information by assigning categories in the lexicon, rather than on the token level. 3.2 Lexicon In the LD model, vowel phones appear within words drawn from the lexicon. Each such lexeme is represented as a frame plus a list of vowel categories vℓ. Lexeme assignments for each token are drawn from a DP with a lexicon-generating base distribution HL. The category for each vowel token in the word is determined by the lexeme; the formant values are drawn from the corresponding Gaussian as in the IGMM: GL ∼DP(αl, HL) (5) xi = ℓ∼GL (6) wij|vℓj = c ∼N(µc, Σc) (7) HL generates lexemes by first drawing the number of phones from a geometric distribution and the number of consonant phones from a binomial distribution. 
The consonants are then generated from a DP with a uniform base distribution (but note they are fixed at inference time, i.e., are observed categorically), while the vowel phones vℓare generated by the IGMM DP above, vℓj ∼GC. Note that two draws from HL may result in identical lexemes; these are nonetheless considered to be separate (homophone) lexemes. 4 Topic-Lexical-Distributional Model The TLD model retains the IGMM vowel phone component, but extends the lexicon of the LD model by adding topic-specific lexicons, which capture the notion that lexeme probabilities are topicdependent. Specifically, the TLD model replaces 5This compound distribution is equivalent to Σc ∼IW(Σ0, ν0), µc|Σc ∼N(µ0, Σc ν0 ) 1076 the Dirichlet Process lexicon with a Hierarchical Dirichlet Process (HDP; Teh (2006)). In the HDP lexicon, a top-level global lexicon is generated as in the LD model. Topic-specific lexicons are then drawn from the global lexicon, containing a subset of the global lexicon (but since the size of the global lexicon is unbounded, so are the topic-specific lexicons). These topic-specific lexicons are used to generate the tokens in a similar manner to the LD model. There are a fixed number of lower level topic-lexicons; these are matched to the number of topics in the LDA model used to infer the topic distributions (see Section 6.4). More formally, the global lexicon is generated as a top-level DP: GL ∼DP(αl, HL) (see Section 3.2; remember HL includes draws from the IGMM over vowel categories). GL is in turn used as the base distribution in the topic-level DPs, Gk ∼DP(αk, GL). In the Chinese Restaurant Franchise metaphor often used to describe HDPs, GL is a global menu of dishes (lexemes). The topicspecific lexicons are restaurants, each with its own distribution over dishes; this distribution is defined by seating customers (word tokens) at tables, each of which serves a single dish from the menu: all tokens x at the same table t are assigned to the same lexeme ℓt. Inference (Section 5) is defined in terms of tables rather than lexemes; if multiple tables draw the same dish from GL, tokens at these tables share a lexeme. In the TLD model, tokens appear within situations, each of which has a distribution over topics θh. Each token xhi has a co-indexed topic assignment variable, zhi, drawn from θh, designating the topic-lexicon from which the table for xhi is to be drawn. The formant values for whij are drawn in the same way as in the LD model, given the lexeme assignment at xhi. This results in the following model, shown in Figure 2: GL ∼DP(αl, HL) (8) Gk ∼DP(αk, GL) (9) zhi ∼Mult(θh) (10) xhi = t|zhi = k ∼Gk (11) whij|xhi = t, vℓtj = c ∼N(µc, Σc) (12) 5 Inference: Gibbs Sampling We use Gibbs sampling to infer three sets of variables in the TLD model: assignments to vowel categories in the lexemes, assignments of tokens to µ0, κ0, Σ0, ν0 HC GC αc µc, Σc ∞ λ HL GL αl Gk αk K zhi xhi fhi whij |whi| |xh| D θh Figure 2: TLD model, depicting, from left to right, the IGMM component, the LD lexicon component, the topic-specific lexicons, and finally the token xhi, appearing in document h, with observed vowel formants whij and frame fhi. The lexeme assignment xhi and the topic assignment zhi are inferred, the latter using the observed documenttopic distribution θh. Note that fi is deterministic given the lexeme assignment. Squared nodes depict hyperparameters. λ is the set of hyperparameters used by HL when generating lexical items (see Section 3.2). 
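Read generatively, Equations 8-12 describe a forward sampling procedure over the structure captioned in Figure 2. The sketch below simulates a simplified version of that process for a single situation, using the Chinese-restaurant view of the HDP described in the text. It is our own illustration rather than the authors' code: the lexicon base distribution HL and the vowel Gaussians are crude stand-ins, and the lexeme-level bookkeeping is a simplification of the full table-based franchise.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha_l, alpha_k = 1.0, 1.0      # concentration parameters (illustrative values only)
lexemes = []                     # stub lexemes: each is a list of vowel-category means
global_uses = []                 # one entry per topic-level table, recording its lexeme id
topic_tables = {}                # topic k -> list of [lexeme_id, customer_count]

def draw_from_HL():
    """Crude stand-in for the lexicon base distribution H_L: 1-2 vowel 'categories'."""
    n_vowels = int(rng.integers(1, 3))
    return [rng.normal(loc=[500.0, 1500.0], scale=[150.0, 400.0]) for _ in range(n_vowels)]

def sample_lexeme(k):
    """CRP within topic k (Eq. 11); new tables draw their lexeme from the global CRP (G_L)."""
    tables = topic_tables.setdefault(k, [])
    weights = np.array([c for _, c in tables] + [alpha_k], dtype=float)
    choice = int(rng.choice(len(weights), p=weights / weights.sum()))
    if choice < len(tables):                    # join an existing table in this topic
        tables[choice][1] += 1
        return tables[choice][0]
    # open a new table: pick its lexeme proportionally to global table counts (m_l in Eq. 16)
    w = np.array([1.0] * len(global_uses) + [alpha_l])
    g = int(rng.choice(len(w), p=w / w.sum()))
    lex_id = global_uses[g] if g < len(global_uses) else len(lexemes)
    if lex_id == len(lexemes):
        lexemes.append(draw_from_HL())          # a brand-new lexeme drawn from H_L
    global_uses.append(lex_id)
    tables.append([lex_id, 1])
    return lex_id

def generate_token(theta_h):
    """Eqs. 10-12: topic from theta_h, lexeme via the topic lexicon, vowels from Gaussians."""
    k = int(rng.choice(len(theta_h), p=theta_h))          # z_hi ~ Mult(theta_h)
    lex_id = sample_lexeme(k)                             # x_hi via G_k
    vowels = [rng.normal(mu, [40.0, 90.0]) for mu in lexemes[lex_id]]  # w_hij ~ N(mu_c, Sigma_c)
    return k, lex_id, [v.round(0) for v in vowels]

theta = np.array([0.7, 0.2, 0.1])    # an observed situation-topic distribution
for _ in range(5):
    print(generate_token(theta))
```

The key property the sketch preserves is the sharing structure: topics reuse lexemes from a single global lexicon, so a lexeme that is popular overall is also more likely to be chosen when a new table is opened in any topic.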
topics, and assignments of tokens to tables (from which the assignment to lexemes can be read off). 5.1 Sampling lexeme vowel categories Each vowel in the lexicon must be assigned to a category in the IGMM. The posterior probability of a category assignment is composed of the DP prior over categories and the likelihood of the observed vowels belonging to that category. We use wℓj to denote the set of vowel formants at position j in words that have been assigned to lexeme ℓ. Then, P(vℓj = c|w, x, ℓ\ℓ) ∝P(vℓj = c|ℓ\ℓ)p(wℓj|vℓj = c, w\ℓj) (13) The first (DP prior) factor is defined as: P(vℓj = c|v\ℓj) = ( nc P c nc+αc if c exists αc P c nc+αc if c new (14) where nc is the number of other vowels in the lexicon, v\lj, assigned to category c. Note that there is always positive probability of creating a new category. The likelihood of the vowels is calculated by marginalizing over all possible means and variances of the Gaussian category parameters, given 1077 the NIW prior. For a single point (if |wℓj| = 1), this predictive posterior is in the form of a Student-t distribution; for the more general case see Feldman et al. (2013a), Eq. B3. 5.2 Sampling table & topic assignments We jointly sample x and z, the variables assigning tokens to tables and topics. Resampling the table assignment includes the possibility of changing to a table with a different lexeme or drawing a new table with a previously seen or novel lexeme. The joint conditional probability of a table and topic assignment, given all other current token assignments, is: P(xhi = t, zhi = k|whi, θh, t\hi, ℓ, w\hi) = P(k|θh)P(t|k, ℓt, t\hi) Y c∈C p(whi·|vℓt· = c, w\hi) (15) The first factor, the prior probability of topic k in document h, is given by θhk obtained from the LDA. The second factor is the prior probability of assigning word xi to table t with lexeme ℓgiven topic k. It is given by the HDP, and depends on whether the table t exists in the HDP topic-lexicon for k and, likewise, whether any table in the topiclexicon has the lexeme ℓ: P(t|k, ℓ, t\hi) ∝      nkt nk+αk if t in k αk nk+αk mℓ m+αl if t new, ℓknown αk nk+αk αℓ m+αl if t and ℓnew (16) Here nkt is the number of other tokens at table t, nk are the total number of tokens in topic k, mℓ is the number of tables across all topics with the lexeme ℓ, and m is the total number of tables. The third factor, the likelihood of the vowel formants whi in the categories given by the lexeme vl, is of the same form as the likelihood of vowel categories when resampling lexeme vowel assignments. However, here it is calculated over the set of vowels in the token assigned to each vowel category (i.e., the vowels at indices where vℓt· = c). For a new lexeme, we approximate the likelihood using 100 samples drawn from the prior, each weighted by α/100 (Neal, 2000). 5.3 Hyperparameters The three hyperparameters governing the HDP over the lexicon, αl and αk, and the DP over vowel categories, αc, are estimated using a slice sampler. The remaining hyperparameters for the vowel category and lexeme priors are set to the same values used by Feldman et al. (2013a). 6 Experiments 6.1 Corpus We test our model on situated child directed speech, taken from the C1 section of the Brent corpus in CHILDES (Brent and Siskind, 2001; MacWhinney, 2000). This corpus consists of transcripts of speech directed at infants between the ages of 9 and 15 months, captured in a naturalistic setting as parent and child went about their day. This ensures variability of situations. 
Utterances with unintelligible words or quotes are removed. We restrict the corpus to content words by retaining only words tagged as adj, n, part and v (adjectives, nouns, particles, and verbs). This is in line with evidence that infants distinguish content and function words on the basis of acoustic signals (Shi and Werker, 2003). Vowel categorization improves when attending only to more prosodically and phonologically salient tokens (Adriaans and Swingley, 2012), which generally appear within content, not function words. The final corpus consists of 13138 tokens and 1497 word types. 6.2 Hillenbrand Vowels The transcripts do not include phonetic information, so, following Feldman et al. (2013a), we synthesize the formant values using data from Hillenbrand et al. (1995). This dataset consists of a set of 1669 manually gathered formant values from 139 American English speakers (men, women and children) for 12 vowels. For each vowel category, we construct a Gaussian from the mean and covariance of the datapoints belonging to that category, using the first and second formant values measured at steady state. We also construct a second dataset using only datapoints from adult female speakers. Each word in the dataset is converted to a phonemic representation using the CMU pronunciation dictionary, which returns a sequence of Arpabet phoneme symbols. If there are multiple possible pronunciations, the first one is used. Each vowel phoneme in the word is then replaced by formant values drawn from the corresponding Hillenbrand Gaussian for that vowel. 1078 6.3 Merging Consonant Categories The Arpabet encoding used in the phonemic representation includes 24 consonants. We construct datasets both using the full set of consonants—the ‘C24’ dataset—and with less fine-grained consonant categories. Distinguishing all consonant categories assumes perfect learning of consonants prior to vowel categorization and is thus somewhat unrealistic (Polka and Werker, 1994), but provides an upper limit on the information that word-contexts can give. In the ‘C15’ dataset, the voicing distinction is collapsed, leaving 15 consonant categories. The collapsed categories are B/P, G/K, D/T, CH/JH, V/F, TH/DH, S/Z, SH/ZH, R/L while HH, M, NG, N, W, Y remain separate phonemes. This dataset mirrors the finding in Mani and Plunkett (2010) that 12 month old infants are not sensitive to voicing mispronunciations. The ‘C6’ dataset distinguishes between only 6 coarse consonant phonemes, corresponding to stops (B,P,G,K,D,T), affricates (CH,JH), fricatives (V, F, TH, DH, S, Z, SH, ZH, HH), nasals (M, NG, N), liquids (R, L), and semivowels/glides (W, Y). This dataset makes minimal assumptions about the category categories that infants could use in this learning setting. Decreasing the number of consonants increases the ambiguity in the corpus: bat not only shares a frame (b t) with boat and bite, but also, in the C15 dataset, with put, pad and bad (b/p d/t), and in the C6 dataset, with dog and kite, among many others (STOP STOP). Table 1 shows the percentage of types and tokens that are ambiguous in each dataset, that is, words in frames that match multiple wordtypes. Note that we always evaluate against the gold word identities, even when these are not distinguished in the model’s input. These datasets are intended to evaluate the degree of reliance on consonant information in the LD and TLD models, and to what extent the topics in the TLD model can replace this information. 
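A minimal sketch of the synthesis-and-merging pipeline described in Sections 6.2 and 6.3 is given below. It assumes the Hillenbrand measurements are available as per-vowel lists of (F1, F2) pairs and that Arpabet pronunciations have already been looked up in the CMU dictionary; the toy measurements, function names, and class labels for the C6 mapping are ours, though the consonant groupings themselves follow the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Coarse consonant classes for the 'C6' dataset, as listed in Section 6.3.
C6 = {}
for cls, members in {
    "STOP": ["B", "P", "G", "K", "D", "T"],
    "AFFR": ["CH", "JH"],
    "FRIC": ["V", "F", "TH", "DH", "S", "Z", "SH", "ZH", "HH"],
    "NAS": ["M", "NG", "N"],
    "LIQ": ["R", "L"],
    "GLIDE": ["W", "Y"],
}.items():
    for c in members:
        C6[c] = cls

def fit_vowel_gaussians(measurements):
    """measurements: vowel label -> list of (F1, F2) pairs, e.g. from Hillenbrand et al."""
    params = {}
    for v, pts in measurements.items():
        pts = np.asarray(pts, dtype=float)
        params[v] = (pts.mean(axis=0), np.cov(pts, rowvar=False))
    return params

def synthesize(pron, vowel_params, vowels, cons_map=None):
    """Turn an Arpabet pronunciation into a (frame, formant-vectors) pair."""
    frame, formants = [], []
    for ph in pron:
        if ph in vowels:
            frame.append("_")                               # vowel slot in the frame
            mu, cov = vowel_params[ph]
            formants.append(rng.multivariate_normal(mu, cov))
        else:
            frame.append(cons_map[ph] if cons_map else ph)  # possibly merged consonant
    return frame, formants

# Toy stand-in for the Hillenbrand data (two vowels only; values are invented).
toy = {"IH": [(430, 2300), (460, 2250), (415, 2380)],
       "IY": [(320, 2600), (300, 2700), (340, 2650)]}
params = fit_vowel_gaussians(toy)
print(synthesize(["K", "IH", "T", "IY"], params, vowels=set(toy), cons_map=C6))
```

Passing cons_map=None corresponds to the C24 condition, while the dictionary above produces the C6 condition; a C15 mapping would collapse only the voicing distinctions listed in the text.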
6.4 Topics The input to the TLD model includes a distribution over topics for each situation, which we infer in advance from the full Brent corpus (not only the C1 subset) using LDA. Each transcript in the Brent corpus captures about 75 minutes of parent-child interaction, and thus multiple situations will be included in each file. The transcripts do not delimit Dataset C24 C15 C6 Input Types 1487 1426 1203 Frames 1259 1078 702 Ambig Types % 27.2 42.0 80.4 Ambig Tokens % 41.3 56.9 77.2 Table 1: Corpus statistics showing the increasing amount of ambiguity as consonant categories are merged. Input types are the number of word types with distinct input representations (as opposed to gold orthographic word types, of which there are 1497). Ambiguous types and tokens are those with frames that match multiple (orthographic) word types. situations, so we do this somewhat arbitrarily by splitting each transcript after 50 CDS utterances, resulting in 203 situations for the Brent C1 dataset. As well as function words, we also remove the five most frequent content words (be, go, get, want, come). On average, situations are only 59 words long, reflecting the relative lack of content words in CDS utterances. We infer 50 topics for this set of situations using the mallet toolkit (McCallum, 2002). Hyperparameters are inferred, which leads to a dominant topic that includes mainly light verbs (have, let, see, do). The other topics are less frequent but capture stronger semantic meaning (e.g. yummy, peach, cookie, daddy, bib in one topic, shoe, let, put, hat, pants in another). The word-topic assignments are used to calculate unsmoothed situation-topic distributions θ used by the TLD model. 6.5 Evaluation We evaluate against adult categories, i.e., the ‘goldstandard’, since all learners of a language eventually converge on similar categories. (Since our model is not a model of the learning process, we do not compare the infant learning process to the learning algorithm.) We evaluate both the inferred phonetic categories and words using the clustering evaluation measure V-Measure (VM; Rosenberg and Hirschberg, 2007).6 VM is the harmonic mean of two components, similar to F-score, where the components (VC and VH) are measures of cross entropy between the gold and model categorization. 6Other clustering measures, such as 1-1 matching and pairwise precision and recall (accuracy and completeness) showed the same trends, but VM has been demonstrated to be the most stable measure when comparing solutions with varying numbers of clusters (Christodoulopoulos et al., 2010). 1079 24 Cons 15 Cons 6 Cons 75 80 85 90 Dataset VM LD-all TLD-all LD-w TLD-w Figure 3: Vowel evaluation. ‘all’ refers to datasets with vowels synthesized from all speakers, ‘w’ to datasets with vowels synthesized from adult female speakers’ vowels. The bars show a 95% Confidence Interval based on 5 runs. IGMM-all results in a VM score of 53.9 (CI=0.5); IGMM-w has a VM score of 65.0 (CI=0.2), not shown. For vowels, VM measures how well the inferred phonetic categorizations match the gold categories; for lexemes, it measures whether tokens have been assigned to the same lexemes both by the model and the gold standard. Words are evaluated against gold orthography, so homophones, e.g. hole and whole, are distinct gold words. 6.6 Results We compare all three models—TLD, LD, and IGMM—on the vowel categorization task, and TLD and LD on the lexical categorization task (since IGMM does not infer a lexicon). 
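Since VM is the harmonic mean of a homogeneity-like component (VH, "precision") and a completeness-like component (VC, "recall"), the scoring described in Section 6.5 can be reproduced with off-the-shelf clustering metrics. The toy example below uses scikit-learn and is only meant to illustrate the measure, not the authors' evaluation code; the labels are invented.

```python
from sklearn.metrics import homogeneity_completeness_v_measure

# Gold vowel categories vs. cluster ids assigned by a model (toy example).
gold = ["ih", "ih", "iy", "iy", "eh", "ae", "ae"]
pred = [0, 0, 1, 1, 2, 2, 2]    # cluster ids need not match the gold label names

# Homogeneity plays the role of VH, completeness the role of VC, and their
# harmonic mean is VM; lexeme/word categorization is scored the same way.
vh, vc, vm = homogeneity_completeness_v_measure(gold, pred)
print(f"VH={vh:.3f}  VC={vc:.3f}  VM={vm:.3f}")
```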
The datasets correspond to two sets of conditions: firstly, either using vowel categories synthesized from all speakers or only adult female speakers, and secondly, varying the coarseness of the observed consonant categories. Each condition (model, vowel speakers, consonant set) is run five times, using 1500 iterations of Gibbs sampling with hyperparameter sampling.

Overall, we find that TLD outperforms the other models in both tasks, across all conditions. Vowel categorization results are shown in Figure 3. IGMM performs substantially worse than both TLD and LD, with scores more than 30 points lower than the best results for these models, clearly showing the value of the protolexicon and replicating the results found by Feldman et al. (2013a) on this dataset. Furthermore, TLD consistently outperforms the LD model, finding better phonetic categories, both for vowels generated from the combined categories of all speakers ('all') and vowels generated from adult female speakers only ('w'), although the latter are clearly much easier for both models to learn. Both models perform less well when the consonant frames provide less information, but the TLD model performance degrades less than the LD performance.

Both the TLD and the LD models find 'supervowel' categories, which cover multiple vowel categories and are used to merge minimal pairs into a single lexical item. Figure 4 shows example vowel categories inferred by the TLD model, including two supervowels.

[Figure 4: Vowels found by the TLD model in F1-F2 space; supervowels are indicated in red. The gold-standard vowels are shown in gold in the background but are mostly overlapped by the inferred categories.]

The TLD supervowels are used much less frequently than the supervowels found by the LD model, containing, on average, only two-thirds as many tokens.

Figure 5 shows that TLD also outperforms LD on the lexeme/word categorization task.

[Figure 5: Lexeme evaluation (VM) on the 24-, 15-, and 6-consonant datasets. 'all' refers to datasets with vowels synthesized from all speakers, 'w' to datasets with vowels synthesized from adult female speakers' vowels.]

Again performance decreases as the consonant categories become coarser, but the additional semantic information in the TLD model compensates for the lack of consonant information. In the individual components of VM, TLD and LD have similar VC ("recall"), but TLD has higher VH ("precision"), demonstrating that the semantic information given by the topics can separate potentially ambiguous words, as hypothesized.

Overall, the contextual semantic information added in the TLD model leads to both better phonetic categorization and to a better protolexicon, especially when the input is noisier, using degraded consonants. Since infants are not likely to have perfect knowledge of phonetic categories at this stage, semantic information is a potentially rich source of information that could be drawn upon to offset noise from other domains. The form of the semantic information added in the TLD model is itself quite weak, so the improvements shown here are in line with what infant learners could achieve.

7 Conclusion

Language acquisition is a complex task, in which many heterogeneous sources of information may be useful. In this paper, we investigated whether contextual semantic information could be of help when learning phonetic categories.
We found that this contextual information can improve phonetic learning performance considerably, especially in situations where there is a high degree of phonetic ambiguity in the word-forms that learners hear. This suggests that previous models that have ignored semantic information may have underestimated the information that is available to infants. Our model illustrates one way in which language learners might harness the rich information that is present in the world without first needing to acquire a full inventory of word meanings. The contextual semantic information that the TLD model tracks is similar to that potentially used in other linguistic learning tasks. Theories of cross-situational word learning (Smith and Yu, 2008; Yu and Smith, 2007) assume that sensitivity to situational co-occurrences between words and non-linguistic contexts is a precursor to learning the meanings of individual words. Under this view, contextual semantics is available to infants well before they have acquired large numbers of semantic minimal pairs. However, recent experimental evidence indicates that learners do not always retain detailed information about the referents that are present in a scene when they hear a word (Medina et al., 2011; Trueswell et al., 2013). This evidence poses a direct challenge to theories of cross-situational word learning. Our account does not necessarily require learners to track co-occurrences between words and individual objects, but instead focuses on more abstract information about salient events and topics in the environment; it will be important to investigate to what extent infants encode this information and use it in phonetic learning. Regardless of the specific way in which infants encode semantic information, our method of adding this information by using LDA topics from transcript data was shown to be effective. This method is practical because it can approximate semantic information without relying on extensive manual annotation. The LD model extended the phonetic categorization task by adding word contexts; the TLD model presented here goes even further, adding larger situational contexts. Both forms of top-down information help the low-level task of classifying acoustic signals into phonetic categories, furthering a holistic view of language learning with interaction across multiple levels. Acknowledgments This work was supported by EPSRC grant EP/H050442/1 and a James S. McDonnell Foundation Scholar Award to the final author. References Frans Adriaans and Daniel Swingley. Distributional learning of vowel categories is supported by prosody in infant-directed speech. In Proceedings of the 34th Annual Conference of the Cognitive Science Society (CogSci), 2012. E. Bergelson and D. Swingley. At 6-9 months, human infants know the meanings of many 1081 common nouns. Proceedings of the National Academy of Sciences, 109(9):3253–3258, Feb 2012. David M. Blei, Thomas L. Griffiths, Michael I. Jordan, and Joshua B. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. In Advances in Neural Information Processing Systems 16, 2003. Michael R. Brent and Jeffrey M. Siskind. The role of exposure to isolated words in early vocabulary development. Cognition, 81(2):B33–B44, 2001. Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. Two decades of unsupervised POS induction: How far have we come? In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 575–584, Cambridge, MA, October 2010. 
Association for Computational Linguistics. Bart de Boer and Patricia K. Kuhl. Investigating the role of infant-directed speech with a computer model. Acoustics Research Letters Online, 4(4): 129, 2003. Brian Dillon, Ewan Dunbar, and William Idsardi. A single-stage approach to learning phonological categories: Insights from Inuktitut. Cognitive Science, 37(2):344–377, Mar 2013. Micha Elsner, Sharon Goldwater, Naomi Feldman, and Frank Wood. A cognitive model of early lexical acquisition with phonetic variability. In Proceedings of the 18th Conference on Empirical Methods in Natural Language Processing (EMNLP), 2013. Naomi H. Feldman, Thomas L. Griffiths, Sharon Goldwater, and James L. Morgan. A role for the developing lexicon in phonetic category acquisition. Psychological Review, 2013a. Naomi H. Feldman, Emily B. Myers, Katherine S. White, Thomas L. Griffiths, and James L. Morgan. Word-level information influences phonetic learning in adults and infants. Cognition, 127(3): 427–438, 2013b. Abdellah Fourtassi and Emmanuel Dupoux. A rudimentary lexicon and semantics help bootstrap phoneme acquisition. Submitted. Michael C. Frank, Noah D. Goodman, and Joshua B. Tenenbaum. Using speakers’ referential intentions to model early cross-situational word learning. Psychological Science, 20(5): 578–585, 2009. Manuela Friedrich and Angela D. Friederici. Word learning in 6-month-olds: Fast encoding—weak retention. Journal of Cognitive Neuroscience, 23 (11):3228–3240, Nov 2011. Lakshmi J. Gogate and Lorraine E. Bahrick. Intersensory redundancy and 7-month-old infants’ memory for arbitrary syllable-object relations. Infancy, 2(2):219–231, Apr 2001. J. Hillenbrand, L. A. Getty, M. J. Clark, and K. Wheeler. Acoustic characteristics of American English vowels. Journal of the Acoustical Society of America, 97(5 Pt 1):3099–3111, May 1995. P. W. Jusczyk and Elizabeth A. Hohne. Infants’ memory for spoken words. Science, 277(5334): 1984–1986, Sep 1997. Patricia K. Kuhl, Karen A. Williams, Francisco Lacerda, Kenneth N. Stevens, and Bjorn Lindblom. Linguistic experience alters phonetic perception in infants by 6 months of age. Science, 255(5044):606–608, 1992. Brian MacWhinney. The CHILDES Project: Tools for Analyzing Talk. Lawrence Erlbaum Associates, 2000. D. R. Mandel, P. W. Jusczyk, and D. B. Pisoni. Infants’ recognition of the sound patterns of their own names. Psychological Science, 6(5):314– 317, Sep 1995. Nivedita Mani and Kim Plunkett. Twelve-montholds know their cups from their keps and tups. Infancy, 15(5):445470, Sep 2010. Jessica Maye, Daniel J. Weiss, and Richard N. Aslin. Statistical phonetic learning in infants: facilitation and feature generalization. Developmental Science, 11(1):122–134, Jan 2008. Jessica Maye, Janet F Werker, and LouAnn Gerken. Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3):B101–B111, Jan 2002. Andrew McCallum. MALLET: A machine learning for language toolkit, 2002. Bob McMurray, Richard N. Aslin, and Joseph C. Toscano. Statistical learning of phonetic categories: insights from a computational approach. Developmental Science, 12(3):369–378, May 2009. 1082 Tamara Nicol Medina, Jesse Snedeker, John C. Trueswell, and Lila R. Gleitman. How words can and cannot be learned by observation. Proceedings of the National Academy of Sciences, 108(22):9014–9019, 2011. Radford Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9: 249–265, 2000. Linda Polka and Janet F. 
Werker. Developmental changes in perception of nonnative vowel contrasts. Journal of Experimental Psychology: Human Perception and Performance, 20(2):421– 435, 1994. Carl Rasmussen. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems 13, 2000. Andrew Rosenberg and Julia Hirschberg. Vmeasure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 12th Conference on Empirical Methods in Natural Language Processing (EMNLP), 2007. Brandon C. Roy, Michael C. Frank, and Deb Roy. Relating activity contexts to early word learning in dense longitudinal data. In Proceedings of the 34th Annual Conference of the Cognitive Science Society (CogSci), 2012. Rushen Shi and Janet F. Werker. The basis of preference for lexical words in 6-month-old infants. Developmental Science, 6(5):484–488, 2003. M. Shukla, K. S. White, and R. N. Aslin. Prosody guides the rapid mapping of auditory word forms onto visual objects in 6-mo-old infants. Proceedings of the National Academy of Sciences, 108 (15):6038–6043, Apr 2011. Linda B. Smith and Chen Yu. Infants rapidly learn word-referent mappings via cross-situational statistics. Cognition, 106(3):1558–1568, 2008. Christine L. Stager and Janet F. Werker. Infants listen for more phonetic detail in speech perception than in word-learning tasks. Nature, 388: 381–382, 1997. D. Swingley. Contributions of infant word learning to language development. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1536):3617–3632, Nov 2009. Yee Whye Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL), pages 985 – 992, Sydney, 2006. Tuomas Teinonen, Richard N. Aslin, Paavo Alku, and Gergely Csibra. Visual speech contributes to phonetic learning in 6-month-old infants. Cognition, 108:850–855, 2008. Erik D. Thiessen. The effect of distributional information on children’s use of phonemic contrasts. Journal of Memory and Language, 56(1):16–34, Jan 2007. R. Tincoff and P. W. Jusczyk. Some beginnings of word comprehension in 6-month-olds. Psychological Science, 10(2):172–175, Mar 1999. Ruth Tincoff and Peter W. Jusczyk. Six-montholds comprehend words that refer to parts of the body. Infancy, 17(4):432444, Jul 2012. N. S. Trubetzkoy. Grundz¨uge der Phonologie. Vandenhoeck und Ruprecht, G¨ottingen, 1939. John C. Trueswell, Tamara Nicol Medina, Alon Hafri, and Lila R. Gleitman. Propose but verify: Fast mapping meets cross-situational word learning. Cognitive Psychology, 66:126–156, 2013. G. K. Vallabha, J. L. McClelland, F. Pons, J. F. Werker, and S. Amano. Unsupervised learning of vowel categories from infant-directed speech. Proceedings of the National Academy of Sciences, 104(33):13273–13278, Aug 2007. Janet F. Werker and Richard C. Tees. Crosslanguage speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7:49–63, 1984. H. Henny Yeung and Janet F. Werker. Learning words’ sounds before learning how words sound: 9-month-olds use distinct objects as cues to categorize speech information. Cognition, 113(2): 234–243, Nov 2009. Chen Yu and Linda B. Smith. Rapid word learning under uncertainty via cross-situational statistics. Psychological Science, 18(5):414–420, 2007. 1083
2014
101
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1084–1093, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Bootstrapping into Filler-Gap: An Acquisition Story Marten van Schijndel Micha Elsner The Ohio State University {vanschm,melsner}@ling.ohio-state.edu Abstract Analyses of filler-gap dependencies usually involve complex syntactic rules or heuristics; however recent results suggest that filler-gap comprehension begins earlier than seemingly simpler constructions such as ditransitives or passives. Therefore, this work models filler-gap acquisition as a byproduct of learning word orderings (e.g. SVO vs OSV), which must be done at a very young age anyway in order to extract meaning from language. Specifically, this model, trained on part-of-speech tags, represents the preferred locations of semantic roles relative to a verb as Gaussian mixtures over real numbers. This approach learns role assignment in filler-gap constructions in a manner consistent with current developmental findings and is extremely robust to initialization variance. Additionally, this model is shown to be able to account for a characteristic error made by learners during this period (A and B gorped interpreted as A gorped B). 1 Introduction The phenomenon of filler-gap, where the argument of a predicate appears outside its canonical position in the phrase structure (e.g. [the apple]i that the boy ate ti or [what]i did the boy eat ti), has long been an object of study for syntacticians (Ross, 1967) due to its apparent processing complexity. Such complexity is due, in part, to the arbitrary length of the dependency between a filler and its gap (e.g. [the apple]i that Mary said the boy ate ti). Recent studies indicate that comprehension of filler-gap constructions begins around 15 months (Seidl et al., 2003; Gagliardi et al., 2014). This finding raises the question of how such a complex phenomenon could be acquired so early since children at that age do not yet have a very advanced grasp of language (e.g. ditransitives do not seem to be generalized until at least 31 months; Goldberg et al. 2004, Bello 2012). This work shows that filler-gap comprehension in English may be Age Wh-S Wh-O 1-1 13mo No No 15mo Yes (Yes) 20mo Yes Yes Yes 25mo Yes Yes No Figure 1: The developmental timeline of subject (Wh-S) and object (Wh-O) wh-clause extraction comprehension suggested by experimental results (Seidl et al., 2003; Gagliardi et al., 2014). Parentheses indicate weak comprehension. The final row shows the timeline of 1-1 role bias errors (Naigles, 1990; Gertner and Fisher, 2012). Missing nodes denote a lack of studies. acquired through learning word orderings rather than relying on hierarchical syntactic knowledge. This work describes a cognitive model of the developmental timecourse of filler-gap comprehension with the goal of setting a lower bound on the modeling assumptions necessary for an ideal learner to display filler-gap comprehension. In particular, the model described in this paper takes chunked child-directed speech as input and learns orderings over semantic roles. These orderings then permit the model to successfully resolve filler-gap dependencies.1 Further, the model presented here is also shown to initially reflect an idiosyncratic role assignment error observed in development (e.g. A and B kradded interpreted as A kradded B; Gertner and Fisher, 2012), though after training, the model is able to avoid the error. 
As such, this work may be said to model a learner from 15 months to between 25 and 30 months. 1This model does not explicitly learn gap positions, but rather assigns thematic roles to arguments based on where those arguments are expected to manifest. This approach to filler-gap comprehension is supported by findings that show people do not actually link fillers to gap positions but instead link the filler to a verb with missing arguments (Pickering and Barry, 1991) 1084 2 Background The developmental timeline during which children acquire the ability to process filler-gap constructions is not well-understood. Language comprehension precedes production, and the developmental literature on the acquisition of filler-gap constructions is sparsely populated due to difficulties in designing experiments to test filler-gap comprehension in preverbal infants. Older studies typically looked at verbal children and the mistakes they make to gain insight into the acquisition process (de Villiers and Roeper, 1995). Recent studies, however, indicate that fillergap comprehension likely begins earlier than production (Seidl et al., 2003; Gagliardi and Lidz, 2010; Gagliardi et al., 2014). Therefore, studies of verbal children are probably actually testing the acquisition of production mechanisms (planning, motor skills, greater facility with lexical access, etc) rather than the acquisition of fillergap. Note that these may be related since fillergap could introduce greater processing load which could overwhelm the child’s fragile production capacity (Phillips, 2010). Seidl et al. (2003) showed that children are able to process wh-extractions from subject position (e.g. [who]i ti ate pie) as young as 15 months while similar extractions from object position (e.g. [what]i did the boy eat ti) remain unparseable until around 20 months of age.2 This line of investigation has been reopened and expanded by Gagliardi et al. (2014) whose results suggest that the experimental methodology employed by Seidl et al. (2003) was flawed in that it presumed infants have ideal performance mechanisms. By providing more trials of each condition and controlling for the pragmatic felicity of test statements, Gagliardi et al. (2014) provide evidence that 15-month old infants can process wh-extractions from both subject and object positions. Object extractions are more difficult to comprehend than subject extractions, however, perhaps due to additional processing load in object extractions (Gibson, 1998; Phillips, 2010). Similarly, Gagliardi and Lidz (2010) show that relativized extractions with a wh-relativizer (e.g. find [the boy]i who ti ate the apple) are easier to comprehend than relativized extractions with that as the relativizer (e.g. find [the boy]i that ti ate the apple). Yuan et al. (2012) demonstrate that 19-month olds use their knowledge of nouns to learn both verbs and their associated argument structure. In 2Since the wh-phrase is in the same (or a very similar) position as the original subject when the wh-phrase takes subject position, it is not clear that these constructions are true extractions (Culicover, 2013), however, this paper will continue to refer to them as such for ease of exposition. their study, infants were shown video of a person talking on a phone using a nonce verb with either one or two nouns (e.g. Mary kradded Susan). 
Under the assumption that infants look longer at things that correspond to their understanding of a prompt, the infants were then shown two images that potentially depicted the described action – one picture where two actors acted independently (reflecting an intransitive proposition) and one picture where one actor acted on the other (reflecting a transitive proposition).3 Even though the infants had no extralinguistic knowledge about the verb, they consistently treated the verb as transitive if two nouns were present and intransitive if only one noun was present. Similarly, Gertner and Fisher (2012) show that intransitive phrases with conjoined subjects (e.g. John and Mary gorped) are given a transitive interpretation (i.e. John gorped Mary) at 21 months (henceforth termed ‘1-1 role bias’), though this effect is no longer present at 25 months (Naigles, 1990). This finding suggests both that learners will ignore canonical structure in favor of using all possible arguments and that children have a bias to assign a unique semantic role to each argument. It is important to note, however, that crosslinguistically children do not seem to generalize beyond two arguments until after at least 31 months of age (Goldberg et al., 2004; Bello, 2012), so a predicate occurring with three nouns would still likely be interpreted as merely transitive rather than ditransitive. Computational modeling provides a way to test the computational level of processing (Marr, 1982). That is, given the input (child-directed speech, adult-directed speech, and environmental experiences), it is possible to probe the computational processes that result in the observed output. However, previous computational models of grammar induction (Klein and Manning, 2004), including infant grammar induction (Kwiatkowski et al., 2012), have not addressed filler-gap comprehension.4 The closest work to that presented here is the work on BabySRL (Connor et al., 2008; Connor et al., 2009; Connor et al., 2010). BabySRL is a computational model of semantic role acquistion using a similar set of assumptions to the current work. BabySRL learns weights over ordering constraints (e.g. preverbal, second noun, etc.) to acquire semantic role labelling while still exhibiting 1-1 role bias. However, no analysis has evaluated the abil3There were two actors in each image to avoid biasing the infants to look at the image with more actors. 4As one reviewer notes, Joshi et al. (1990) and subsequent work show that filler-gap phenomena can be formally captured by mildly context-sensitive grammar formalisms; these have the virtue of scaling up to adult grammar, but due to their complexity, do not seem to have been described as models of early acquisition. 1085 Susan said John gave girl book -3 -2 -1 0 1 2 Table 1: An example of a chunked sentence (Susan said John gave the girl a red book) with the sentence positions labelled. Nominal heads of noun chunks are in bold. ity of BabySRL to acquire filler-gap constructions. Further comparison to BabySRL may be found in Section 6. 3 Assumptions The present work restricts itself to acquiring fillergap comprehension in English. The model presented here learns a single, non-recursive ordering for the semantic roles in each sentence relative to the verb since several studies have suggested that early child grammars may consist of simple linear grammars that are dictated by semantic roles (Diessel and Tomasello, 2001; Jackendoffand Wittenberg, in press). 
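The verb-relative indexing illustrated in Table 1 can be computed directly once a sentence has been chunked and its final verb identified. The function below is our own reconstruction of that bookkeeping, not the authors' code; it assumes chunking, head-noun substitution, and verb identification have already been done upstream.

```python
def verb_relative_positions(chunked, verbs):
    """Index a chunked sentence relative to its final verb (assumed to be the main verb).

    chunked: list of tokens in which each noun chunk has been replaced by its head noun.
    verbs:   set of tokens to treat as verbs.
    Returns a dict from token index to signed position, with the final verb at 0.
    """
    verb_idx = max(i for i, tok in enumerate(chunked) if tok in verbs)
    return {i: i - verb_idx for i in range(len(chunked))}

# The example from Table 1: "Susan said John gave the girl a red book",
# chunked to head nouns, with the final verb 'gave' placed at position 0.
sent = ["Susan", "said", "John", "gave", "girl", "book"]
positions = verb_relative_positions(sent, verbs={"said", "gave"})
print([(tok, positions[i]) for i, tok in enumerate(sent)])
# [('Susan', -3), ('said', -2), ('John', -1), ('gave', 0), ('girl', 1), ('book', 2)]
```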
This work assumes learners can already identify nouns and verbs, which is supported by Shi et al. (1999) who show that children at an extremely young age can distinguish between content and function words and by Waxman and Booth (2001) who show that children can distinguish between different types of content words. Further, since Waxman and Booth (2001) demonstrate that, by 14 months, children are able to distinguish nouns from modifiers, this work assumes learners can already chunk nouns and access the nominal head. To handle recursion, this work assumes that children treat the final verb in each sentence as the main verb (implicitly assuming sentence segmentation), which ideally assigns roles to each of the nouns in the sentence. Due to the findings of Yuan et al. (2012), this work adopts a ‘syntactic bootstrapping’ theory of acquisition (Gleitman, 1990), where structural properties (e.g. number of nouns) inform the learner about semantic properties of a predicate (e.g. how many semantic roles it confers). Since infants infer the number of semantic roles, this work further assumes they already have expectations about where these roles tend to be realized in sentences, if they appear. These positions may correspond to different semantic roles for different predicates (e.g. the subject of run and of melt); however, the role for predicates with a single argument is usually assigned to the noun that precedes the verb while a second argument is usually assigned after the verb. The semantic properties of these roles may be learned lexically for each predicate, but that is beyond the scope of this work. Therefore, this work uses syntactic and semantic roles interchangeably (e.g. subject and agent). µ σ π GSC -1 0.5 .999 GSN -1 3 .001 GOC 1 0.5 .999 GON 1 3 .001 Φ .00001 Table 2: Initial values for the mean (µ), standard deviation (σ), and prior (π) of each Gaussian as well as the skip penalty (Φ) used in this paper. Finally, following the finding by Gertner and Fisher (2012) that children interpret intransitives with conjoined subjects as transitives, this work assumes that semantic roles have a one-to-one correspondence with nouns in a sentence (similarly used as a soft constraint in the semantic role labelling work of Titov and Klementiev, 2012). 4 Model The model represents the preferred locations of semantic roles relative to the verb as distributions over real numbers. This idea is adapted from Boersma (1997) who uses it to learn constraint rankings in optimality theory. In this work, the final (main) verb is placed at position 0; words (and chunks) before the verb are given progressively more negative positions, and words after the verb are given progressively more positive positions (see Table 1). Learner expectations of where an argument will appear relative to the verb are modelled as two-component Gaussian mixtures: one mixture of Gaussians (GS·) corresponds to the subject argument, another (GO·) corresponds to the object argument. There is no mixture for a third argument since children do not generalize beyond two arguments until later in development (Goldberg et al., 2004; Bello, 2012). One component of each mixture learns to represent the canonical position for the argument (G·C) while the other (G·N) represents some alternate, non-canonical position such as the filler position in filler-gap constructions. 
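Table 2 can be read as the parameters of two two-component Gaussian mixtures whose densities give the model's expectation of finding the subject or object argument at each verb-relative position. The sketch below (our own illustration, not the authors' implementation) instantiates those initial values and evaluates the resulting likelihoods; the skip penalty Φ only enters when whole label sequences are scored, as described in the following paragraphs.

```python
from dataclasses import dataclass
from math import exp, pi, sqrt

@dataclass
class Component:
    mu: float      # mean position relative to the verb
    sigma: float   # standard deviation
    weight: float  # mixture prior

def gaussian(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

# Initial expectations from Table 2: a canonical and a non-canonical component per role.
SUBJECT = [Component(-1, 0.5, 0.999), Component(-1, 3.0, 0.001)]   # G_SC, G_SN
OBJECT  = [Component(+1, 0.5, 0.999), Component(+1, 3.0, 0.001)]   # G_OC, G_ON

def role_likelihood(position, mixture):
    """Density of an argument appearing at this verb-relative position under one role."""
    return sum(c.weight * gaussian(position, c.mu, c.sigma) for c in mixture)

for pos in (-2, -1, 1, 2):
    print(pos,
          round(role_likelihood(pos, SUBJECT), 4),
          round(role_likelihood(pos, OBJECT), 4))
```

Because the non-canonical components are broad and low-weight, a preverbal object only becomes plausible when no postverbal noun competes for the object label, which is the behaviour the surrounding text describes.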
To reflect the fact that learners have had 15 months of exposure to their language before acquiring filler-gap, the mixture is initialized so that there is a stronger probability associated with the canonical Gaussian than with the non-canonical Gaussian of each mixture.5 Finally, the one-to-one role bias is explicitly encoded such that the model cannot use a label that has already been used elsewhere in the sentence. 5Akhtar (1999) finds that learners may not have strong expectations of canonical argument positions until four years of age, but the results of the current study are extremely robust to changes in initialization, as discussed in Section 7 of this paper, so this assumption is mostly adopted for ease of exposition. 1086 Probability Position relative to verb Probability Position relative to verb Figure 2: Visual representations of (Left) the initial model’s expectations of where arguments will appear, given the initial parameters in Table 2 and (Right) the converged model’s expectations of where arguments will appear. Thus, the initial model conditions (see Figure 2) are most likely to realize an SVO ordering, although it is possible to obtain SOV (by sampling a negative number from the blue curve) or even OSV (by also sampling the red curve very close to 0). The model is most likely to hypothesize a preverbal object when it has already assigned the subject role to something and, in addition, there is no postverbal noun competing for the object label. In other words, the model infers that an object extraction may have occurred if there is a ‘missing’ postverbal argument. Finally, the probability of a given sequence is the product of the label probabilities for the component argument positions (e.g. GSC generating an argument at position -2, etc). Since many sentences have more than two nouns, the model is allowed to skip nouns by multiplying a penalty term (Φ) into the product for each skipped noun; the cost is set at 0.00001 for this study, though see Section 7 for a discussion of the constraints on this parameter. See Table 2 for initialization parameters and Figure 2 for a visual representation of the initial expectations of the model. This work uses a model with 2-component mixtures for both subjects and objects (termed the symmetric model). This formulation achieves the best fit to the training data according to the Bayesian Information Criterion (BIC).6 However, follow-up experiments find that the non-canonical subject Gaussian only improves the likelihood of the data by erroneously modeling postverbal nouns in imperative statements. The lack of a canonical subject in English imperatives allows the model to improve the likelihood of the data by using the non-canonical subject Gaussian to capture ficti6The BIC rewards improved log-likelihood but penalizes increased model complexity. tious postverbal arguments. When imperatives are filtered out of the training corpus, the symmetric model obtains a worse BIC fit than a model that lacks the non-canonical subject Gaussian. Therefore, if one makes the assumption that imperatives are prosodically-marked for learners (e.g. the learner is the implicit subject), the best model is one that lacks a non-canonical subject.7 The remainder of this paper assumes a symmetric model to demonstrate what happens if such an assumption is not made; for the evaluations described in this paper, the results are similar in either case. This model differs from other non-recursive computational models of grammar induction (e.g. 
Goldwater and Griffiths, 2007) since it is not based on Hidden Markov Models. Instead, it determines the best ordering for the sentence as a whole. This approach bears some similarity to a Generalized Mallows model (Chen et al., 2009), but the current formulation was chosen due to being independently posited as cognitively plausible (Boersma, 1997). Figure 2 (Right) shows the converged, final state of the model. The model expects the first argument (usually agent) to be assigned preverbally and expects the second (say, patient) to be assigned postverbally; however, there is now a larger chance that the second argument will appear preverbally. 5 Evaluation The model in this work is trained using transcribed child-directed speech (CDS) from the BabySRL portions (Connor et al., 2008) of CHILDES (MacWhinney, 2000). Chunking is performed us7This finding suggests that a Dirichlet Process or other means of dynamically determining the number of components in each mixture would converge to a model that lacks non-canonical subjects if imperative filtering were employed. 1087 Eve (n = 4820) Adam (n = 4461) P R F P R F Initial .54 .64 .59 .53 .60 .56 Trained .52 .69 .59∗ .51 .65 .57∗ Initialc .56 .66 .60 .55 .62 .58 Trainedc .54 .71 .61∗ .53 .67 .59∗ Table 3: Overall accuracy on the Eve and Adam sections of the BabySRL corpus. Bottom rows reflect accuracy when non-agent roles are collapsed into a single role. Note that improvements are numerically slight since filler-gap is relatively rare (Schuler, 2011). ∗p << .01 ing a basic noun-chunker from NLTK (Bird et al., 2009). Based on an initial analysis of chunker performance, yes is hand-corrected to not be a noun. Poor chunker perfomance is likely due to a mismatch in chunker training and testing domains (Wall Street Journal text vs transcribed speech), but chunking noise may be a good estimation of learner uncertainty, so the remaining text is left uncorrected. All noun phrase chunks are then replaced with their final noun (presumed the head) to approximate the ability of children to distinguish nouns from modifiers (Waxman and Booth, 2001). Finally, for each sentence, the model assigns sentence positions to each word with the final verb at zero. Viterbi Expectation-Maximization is performed over each sentence in the corpus to infer the parameters of the model. During the Expectation step, the model uses the current Gaussian parameters to label the nouns in each sentence with argument roles. Since the model is not lexicalized, these roles correspond to the semantic roles most commonly associated with subject and object. The model then chooses the best label sequence for each sentence. These newly labelled sentences are used during the Maximization step to determine the Gaussian parameters that maximize the likelihood of that labelling. The mean of each Gaussian is updated to the mean position of the words it labels. Similarly, the standard deviation of each Gaussian is updated with the standard deviation of the positions it labels. A learning rate of 0.3 is used to prevent large parameter jumps. The prior probability of each Gaussian is updated as the ratio of that Gaussian’s labellings to the total number of labellings from that mixture in the corpus: πρθ = | Gρθ | | Gρ· | (1) where ρ ∈{S, O} and θ ∈{C, N}. Best results seem to be obtained when the skippenalty is loosened by an order of magnitude durSubject Extraction filter: S x V . . . Object Extraction filter: O . . . V . . . 
Eve (n = 1345) Adam (n = 1287) P R F P R F Initialc .53 .57 .55 .53 .52 .52 Trainedc .55 .67 .61∗ .54 .63 .58∗ Table 4: (Above) Filters to extract filler-gap constructions: A) the subject and verb are not adjacent, B) the object precedes the verb. (Below) Filler-gap accuracy on the Eve and Adam sections of the BabySRL corpus when non-agent roles are collapsed into a single role. ∗p << .01 ing testing. Essentially, this forces the model to tightly adhere to the perceived argument structure during training to learn more rigid parameters, but the model is allowed more leeway to skip arguments it has less confidence in during testing. Convergence (see Figure 2) tends to occur after four iterations but can take up to ten iterations depending on the initial parameters. Since the model is unsupervised, it is trained on a given corpus (e.g. Eve) before being tested on the role annotations of that same corpus. The Eve corpus was used for development purposes,8 and the Adam data was used only for testing. For testing, this study uses the semantic role annotations in the BabySRL corpus. These annotations were obtained by automatically semantic role labelling portions of CHILDES with the system of Punyakanok et al. (2008) before roughly hand-correcting them (Connor et al., 2008). The BabySRL corpus is annotated with 5 different roles, but the model described in this paper only uses 2 roles. Therefore, overall accuracy results (see Table 3) are presented both for the raw BabySRL corpus and for a collapsed BabySRL corpus where all non-agent roles are collapsed into a single role (denoted by a subscript c in all tables). Since children do not generalize above two arguments during the modelled age range (Goldberg et al., 2004; Bello, 2012), the collapsed numbers more closely reflect the performance of a learner at this age than the raw numbers. The increase in accuracy obtained from collapsing non-agent arguments indicates that children may initially generalize incorrectly to some verbs and would need to learn lexically-specific role assignments (e.g. double-object constructions of give). Since the current work is interested in general filler-gap comprehension at this age, including over unknown verbs, the remaining analyses in this paper con8This is included for transparency, though the initial parameters have very little bearing on the final results as stated in Section 7, so the danger of overfitting to development data is very slight. 1088 P R F P R F Eve Subj (n = 691) Obj (n = 654) Initialc .66 .83 .74 .35 .31 .33 Trainedc .64 .84 .72† .45 .52 .48∗ Adam Subj (n = 886) Obj (n = 1050) Initialc .69 .81 .74 .33 .27 .30 Trainedc .66 .81 .73 .44 .48 .46∗ P R F P R F Eve Wh- (n = 689) That (n = 125) Initialc .63 .45 .53 .43 .48 .45 Trainedc .73 .75 .74∗ .44 .57 .50† Adam Wh- (n = 748) That (n = 189) Initialc .50 .37 .42 .50 .50 .50 Trainedc .61 .65 .63∗ .47 .56 .51† Table 5: (Left) Subject-extraction accuracy and object-extraction accuracy and (Right) Wh-relative accuracy and that-relative accuracy; calculated over the Eve and Adam sections of the BabySRL corpus with non-agent roles collapsed into a single role. †p = .02 ∗p << .01 sider performance when non-agent arguments are collapsed.9 Next, a filler-gap version of the BabySRL corpus is created using a coarse filtering process: the new corpus is comprised of all sentences where an associated object precedes the final verb and all sentences where the relevant subject is not immediately followed by the final verb (see Table 4). 
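As a concrete illustration of this filtering step, the sketch below implements the two Table 4 patterns over a simple assumed sentence representation (indices for the final verb and the annotated subject and object); the representation and helper names are ours, chosen for illustration, not the paper's preprocessing code.

```python
def is_subject_extraction(verb_index, subject_index):
    """Subject-extraction filter from Table 4: the subject is not immediately
    followed by the final verb (S x V ...)."""
    return subject_index is not None and verb_index - subject_index > 1


def is_object_extraction(verb_index, object_index):
    """Object-extraction filter from Table 4: an associated object precedes
    the final verb (O ... V ...)."""
    return object_index is not None and object_index < verb_index


def filler_gap_subset(corpus):
    """Keep sentences that pass either coarse filter.

    Each sentence is assumed to be a dict holding the final-verb index and
    (possibly None) indices of the annotated subject and object -- an
    illustrative format, not the paper's actual data structure.
    """
    return [s for s in corpus
            if is_subject_extraction(s["verb"], s.get("subject"))
            or is_object_extraction(s["verb"], s.get("object"))]


if __name__ == "__main__":
    toy = [
        {"words": "the dog that I saw".split(), "verb": 4, "subject": 3, "object": 1},  # object precedes verb: kept
        {"words": "the dog ran".split(), "verb": 2, "subject": 1, "object": None},      # canonical intransitive: dropped
    ]
    print(len(filler_gap_subset(toy)))  # -> 1
```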
For these filler-gap evaluations, the model is trained on the full version of the corpus in question (e.g. Eve) before being tested on the filler-gap subset of that corpus. The overall results of the filler-gap evaluation (see Table 4) indicate that the model improves significantly at parsing filler-gap constructions after training. The performance of the model on roleassignment in filler-gap constructions may be analyzed further in terms of how the model performs on subject-extractions compared with object-extractions and in terms of how the model performs on that-relatives compared with whrelatives (see Table 5). The model actually performs worse at subjectextractions after training than before training. This is unsurprising because, prior to training, subjects have little-to-no competition for preverbal role assignments; after training, there is a preverbal extracted object category, which the model can erroneously use. This slight, though significant in Eve, deficit is counter-balanced by a very substantial and significant improvement in objectextraction labelling accuracy. Similarly, training confers a large and significant improvement for role assignment in wh-relative constructions, but it yields less of an improvement for that-relative constructions. This difference mimics a finding observed in the developmental literature where children seem slower to acquire comprehension of that-relatives than of whrelatives (Gagliardi and Lidz, 2010). 9Though performance is slightly worse when arguments are not collapsed, all the same patterns emerge. 6 Comparison to BabySRL The acquisition of semantic role labelling (SRL) by the BabySRL model (Connor et al., 2008; Connor et al., 2009; Connor et al., 2010) bears many similarities to the current work and is, to our knowledge, the only comparable line of inquiry to the current one. The primary function of BabySRL is to model the acquisition of semantic role labelling while making an idiosyncratic error which infants also make (Gertner and Fisher, 2012), the 1-1 role bias error (John and Mary gorped interpreted as John gorped Mary). Similar to the model presented in this paper, BabySRL is based on simple ordering features such as argument position relative to the verb and argument position relative to the other arguments. This section will demonstrate that the model in this paper initially reflects 1-1 role bias comparably to BabySRL, though it progresses beyond this bias after training.10 Further, the model in this paper is able to reflect the concurrent acquisition of fillergap whereas BabySRL does not seem well-suited to such a task. Finally, BabySRL performs undesirably in intransitive settings whereas the model in this paper does not. Connor et al. (2008) demonstrate that a supervised perceptron classifier, based on positional features and trained on the silver role label annotations of the BabySRL corpus, manifests 1-1 role bias errors. Follow-up studies show that supervision may be lessened (Connor et al., 2009) or removed (Connor et al., 2010) and BabySRL will still reflect a substantial 1-1 role bias. Connor et al. (2008) and Connor et al. (2009) run direct analyses of how frequently their models make 1-1 role bias errors. A comparable evaluation may be run on the current model by generating 1000 sentences with a structure of NNV and reporting how many times the model chooses a subject-first labelling (see Table 6).11 10All evaluations in this section are preceded by training on the chunked Eve corpus. 
11While Table 6 analyzes erroneous labellings of NNV structure, the ‘Obj’ column of Table 5 (Left) 1089 Error rate Initial .36 Trained .11 Initial (given 2 args) .66 Trained (given 2 args) .13 2008 arg-arg position .65 2008 arg-verb position 0 2009 arg-arg position .82 2009 arg-verb position .63 Table 6: 1-1 role bias error in this model compared to the models of Connor et al. (2008) and Connor et al. (2009). That is, how frequently each model labelled an NNV sentence SOV. Since the Connor et al. models are perceptron-based, they require both arguments be labelled. The model presented in this paper does not share this restriction, so the raw error rate for this model is presented in the first two lines; the error rate once this additional restriction is imposed is given in the second two lines. The results of Connor et al. (2008) and Connor et al. (2009) depend on whether BabySRL uses argument-argument relative position as a feature or argument-verb relative position as a feature (there is no combined model). Further, the model presented here from Connor et al. (2009) has a unique argument constraint, similar to the model in this paper, in order to make comparison as direct as possible. The 1-1 role bias error rate (before training) of the model presented in this paper is comparable to that of Connor et al. (2008) and Connor et al. (2009), which shows that the current model provides comparable developmental modeling benefits to the BabySRL models. Further, similar to real children (see Figure 1) the model presented in this paper develops beyond this error by the end of its training,12 whereas the BabySRL models still make this error after training. Connor et al. (2010) look at how frequently their model correctly labels the agent in transitive and intransitive sentences with unknown verbs (to demonstrate that it exhibits an agent-first bias). This evaluation can be replicated for the current study by generating 1,000 sentences with the transitive form of NVN and a further 1,000 sentences with the intransitive form of NV (see Table 7). Since Connor et al. (2010) investigate the effects shows model accuracy on NNV structures. 12It is important to note that the unique argument constraint prevents the current model from actually getting the correct, conjoined-subject parse, but it no longer exhibits agent-first bias, an important step for acquiring passives, which occurs between 3 and 4 years (Thatcher et al., 2008). NVN NV Sents in Eve 1173 1513 Sents in Adam 1029 1353 Initial .67 1 Trained .65 .96 Weak (10) lexical .71 .59 Strong (365) lexical .74 .41 Gold Args .77 .58 Table 7: Agent-prediction recall accuracy in transitive (NVN) and intransitive (NV) settings of the model presented in this paper (middle) and the combined model of Connor et al. (2010) (bottom), which has features for argument-argument relative position as well as argument-predicate relative position and so is closest to the model presented in this paper. of different initial lexicons, this evaluation compares against the resulting BabySRL from each initializer: they initially seed their part-of-speech tagger with either the 10 or 365 most frequent nouns in the corpus or they dispense with the tagger and use gold part-of-speech tags. As with subject extraction, the model in this paper gets less accurate after training because of the newly minted extracted object category that can be mistakenly used in these canonical settings. While the model of Connor et al. 
(2010) outperforms the model presented here when in a transitive setting, their model does much worse in an intransitive setting. The difference in transitive settings stems from increased lexicalization, as is apparent from their results alone; the model presented here initially performs close to their weakly lexicalized model, though training impedes agentprediction accuracy due to an increased probability of non-canonical objects. For the intransitive case, however, whereas the model presented in this paper is generally able to successfully label the lone noun as the subject, the model of Connor et al. (2010) chooses to label lone nouns as objects about 40% of the time. This likely stems from their model’s reliance on argumentargument relative position as a feature; when there is no additional argument to use for reference, the model’s accuracy decreases. This is borne out by their model (not shown in Table 7) that omits the argument-argument relative position feature and solely relies on verb-argument position, which achieves up to 70% accuracy in intransitive settings. Even in that case, however, BabySRL still chooses to label lone nouns as objects 30% of the time. The fact that intransitive sentences are more common than transitive sentences in both the Eve and Adam sections of the BabySRL corpus suggests that learners should be more likely to assign 1090 correct roles in an intransitive setting, which is not reflected in the BabySRL results. The overall reason for the different results between the current work and BabySRL is that BabySRL relies on positional features that measure the relative position of two individual elements (e.g. where a given noun is relative to the verb). Since the model in this paper operates over global orderings, it implicitly takes into account the positions of other nouns as it models argument position relative to the verb; object and subject are in competition as labels for preverbal nouns, so a preverbal object is usually only assigned once a subject has already been detected. Further, while BabySRL consistently reflects 11 role bias (corresponding to a pre 25-month old learner), it also learns to productively label five roles, which developmental studies have shown does not take place until at least 31 months (Goldberg et al., 2004; Bello, 2012). Finally, it does not seem likely that BabySRL could be easily extended to capture filler-gap acquisition. The argumentverb position features impede acquisition of fillergap by classifying preverbal arguments as agents, and the argument-argument position features inhibit accurate labelling in intransitive settings and result in an agent-first bias which would tend to label extracted objects as agents. In fact, these observations suggest that any linear classifier which relies on positioning features will have difficulties modeling filler-gap acquisition. In sum, the unlexicalized model presented in this paper is able to achieve greater labelling accuracy than the lexicalized BabySRL models in intransitive settings, though this model does perform slightly worse in the less common transitive setting. Further, the unsupervised model in this paper initially reflects developmental 1-1 role bias as well as the supervised BabySRL models, and it is able to progress beyond this bias. Finally, unlike BabySRL, the model presented here provides a cognitive model of the acquisition of filler-gap comprehension, which BabySRL does not seem wellsuited to model. 
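To show how the synthetic NNV, NV, and NVN comparisons in this section can be run, the sketch below pairs an exhaustive search over one-to-one labellings with a stand-in scorer whose mixture values merely approximate a trained model; none of the numbers or names are taken from the papers being compared.

```python
import itertools
import math


def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))


def toy_role_prob(role, pos):
    """Stand-in for the trained mixtures (illustrative values, not the learned parameters):
    subjects expected just before the verb, objects mostly after it, plus a small,
    diffuse non-canonical preverbal object component."""
    if role == "S":
        return 0.95 * gaussian(pos, -1.0, 0.5) + 0.05 * gaussian(pos, -1.0, 3.0)
    return 0.9 * gaussian(pos, 1.0, 0.5) + 0.1 * gaussian(pos, -2.0, 2.5)


def best_labelling(noun_positions, skip=1e-5, role_prob=toy_role_prob):
    """Exhaustive Viterbi-style search over labellings obeying the one-to-one role
    constraint: S and O are each used at most once, and every skipped noun
    multiplies in the skip penalty."""
    best, best_p = None, -1.0
    for labels in itertools.product(("S", "O", None), repeat=len(noun_positions)):
        if labels.count("S") > 1 or labels.count("O") > 1:
            continue
        p = 1.0
        for pos, lab in zip(noun_positions, labels):
            p *= skip if lab is None else role_prob(lab, pos)
        if p > best_p:
            best, best_p = labels, p
    return best


if __name__ == "__main__":
    # Frames used in the comparisons above; with these toy parameters the NNV frame
    # is no longer labelled SOV (no 1-1 role bias), the lone NV noun is labelled as
    # subject, and the NVN frame receives the canonical S...O labelling.
    print("NNV (nouns at -2, -1):", best_labelling([-2, -1]))
    print("NV  (noun at -1):     ", best_labelling([-1]))
    print("NVN (nouns at -1, 1): ", best_labelling([-1, 1]))
```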
7 Discussion This paper has presented a simple cognitive model of filler-gap acquisition, which is able to capture several findings from developmental psychology. Training significantly improves role labelling in the case of object-extractions, which improves the overall accuracy of the model. This boost is accompanied by a slight decrease in labelling accuracy in subject-extraction settings. The asymmetric ease of subject versus object comprehension is well-documented in both children and adults (Gibson, 1998), and while training improves the model’s ability to process object-extractions, there is still a gap between object-extraction and subject-extraction comprehension even after training. Further, the model exhibits better comprehension of wh-relatives than that-relatives similar to children (Gagliardi and Lidz, 2010). This could also be an area where a lexicalized model could do better. As Gagliardi and Lidz (2010) point out, whereas wh-relatives such as who or which always signify a filler-gap construction, that can occur for many different reasons (demonstrative, determiner, complementizer, etc) and so is a much weaker filler-gap cue. A lexical model could potentially pick up on clues which could indicate when that is a relativizer or simply improve on its comprehension of wh-relatives even more. It is interesting to note that the cuurent model does not make use of that as a cue at all and yet is still slower at acquiring that-relatives than wh-relatives. This fact suggests that the findings of Gagliardi and Lidz (2010) may be partially explained by a frequency effect: perhaps the input to children is simply biased such that wh-relatives are much more common than that-relatives (as shown in Table 5). This model also initially reflects the 1-1 role bias observed in children (Gertner and Fisher, 2012) as well as previous models (Connor et al., 2008; Connor et al., 2009; Connor et al., 2010) without sacrificing accuracy in canonical intransitive settings. Finally, this model is extremely robust to different initializations. The canonical Gaussian expectations can begin far from the verb (±3) or close to the verb (±0.1), and the standard deviations of the distributions and the skip-penalty can vary widely; the model always converges to give comparable results to those presented here. The only constraint on the initial parameters is that the probability of the extracted object occurring preverbally must exceed the skip-penalty (i.e. extraction must be possible). In short, this paper describes a simple, robust cognitive model of the development of a learner between 15 months until somewhere between 25- and 30-months old (since 1-1 role bias is no longer present but no more than two arguments are being generalized). In future, it would be interesting to incorporate lexicalization into the model presented in this paper, as this feature seems likely to bridge the gap between this model and BabySRL in transitive settings. Lexicalization should also help further distinguish modifiers from arguments and improve the overall accuracy of the model. It would also be interesting to investigate how well this model generalizes to languages besides English. Since the model is able to use the verb position as a semi-permeable boundary between canonical subjects and objects, it may not work as 1091 well in verb-final languages, and thus makes the prediction that filler-gap comprehension may be acquired later in development in such languages due to a greater reliance on hierarchical syntax. 
Ordering is one of the definining characteristics of a language that must be acquired by learners (e.g. SVO vs SOV), and this work shows that filler-gap comprehension can be acquired as a byproduct of learning orderings rather than having to resort to higher-order syntax. Note that this model cannot capture the constraints on filler-gap usage which require a hierarchical grammar (e.g. subjacency), but such knowledge is really only needed for successful production of filler-gap constructions, which occurs much later (around 5 years; de Villiers and Roeper, 1995). Further, the kind of ordering system proposed in this paper may form an initial basis for learning such grammars (Jackendoffand Wittenberg, in press). 8 Acknowledgements Thanks to Peter Culicover, William Schuler, Laura Wagner, and the attendees of the OSU 2013 Fall Linguistics Colloquium Fest for feedback on this work. This work was partially funded by an OSU Dept. of Linguistics Targeted Investment for Excellence (TIE) grant for collaborative interdisciplinary projects conducted during the academic year 2012-13. References Nameera Akhtar. 1999. Acquiring basic word order: evidence for data-driven learning of syntactic structure. Journal of Child Language, 26:339–356. Sophia Bello. 2012. Identifying indirect objects in French: An elicitation task. In Proceedings of the 2012 annual conference of the Canadian Linguistic Association. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O’Reilly, Beijing. Paul Boersma. 1997. How we learn variation, optionality, and probability. Proceedings of the Institute of Phonetic Sciences of the University of Amsterdam, 21:43–58. Harr Chen, S.R.K. Branavan, Regina Barzilay, and David R. Karger. 2009. Content modeling using latent permutations. Journal of Artificial Intelligence Research, 36:129–163. Michael Connor, Yael Gertner, Cynthia Fisher, and Dan Roth. 2008. Baby srl: Modeling early language acquisition. In Proceedings of the Twelfth Conference on Computational Natural Language Learning. Michael Connor, Yael Gertner, Cynthia Fisher, and Dan Roth. 2009. Minimally supervised model of early language acquisition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning. Michael Connor, Yael Gertner, Cynthia Fisher, and Dan Roth. 2010. Starting from scratch in semantic role labelling. In Proceedings of ACL 2010. Peter Culicover. 2013. Explaining syntax: representations, structures, and computation. Oxford University Press. Jill de Villiers and Thomas Roeper. 1995. Barriers, binding, and acquisition of the dp-np distinction. Language Acquisition, 4(1):73–104. Holger Diessel and Michael Tomasello. 2001. The acquisition of finite complement clauses in english: A corpus-based analysis. Cognitive Linguistics, 12:1–45. Annie Gagliardi and Jeffrey Lidz. 2010. Morphosyntactic cues impact filler-gap dependency resolution in 20- and 30-month-olds. In Poster session of BUCLD35. Annie Gagliardi, Tara M. Mease, and Jeffrey Lidz. 2014. Discontinuous development in the acquisition of filler-gap dependencies: Evidence from 15and 20-montholds. Harvard unpublished manuscript: http://www.people.fas.harvard.edu/∼gagliardi. Yael Gertner and Cynthia Fisher. 2012. Predicted errors in children’s early sentence comprehension. Cognition, 124:85–94. Edward Gibson. 1998. Linguistic complexity: Locality of syntactic dependencies. Cognition, 68(1):1–76. Lila R. Gleitman. 1990. 
The structural sources of verb meanings. Language Acquisition, 1:3–55. Adele E. Goldberg, Devin Casenhiser, and Nitya Sethuraman. 2004. Learning argument structure generalizations. Cognitive Linguistics, 14(3):289–316. Sharon Goldwater and Tom Griffiths. 2007. A fully Bayesian approach to unsupervised partof-speech tagging. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Ray Jackendoffand Eva Wittenberg. in press. What you can say without syntax: A hierarchy of grammatical complexity. In Fritz Newmeyer and Lauren Preston, editors, Measuring Linguistic Complexity. Oxford University Press. Aravind K. Joshi, K. Vijay Shanker, and David Weir. 1990. The convergence of mildly contextsensitive grammar formalisms. Technical Report MS-CIS-90-01, Department of Computer and Information Science, University of Pennsylvania. 1092 Dan Klein and Christopher D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics. Tom Kwiatkowski, Sharon Goldwater, Luke S. Zettlemoyer, and Mark Steedman. 2012. A probabilistic model of syntactic and semantic acquisition from child-directed utterances and their meanings. In Proceedings of EACL 2012. Brian MacWhinney. 2000. The CHILDES project: Tools for analyzing talk. Lawrence Elrbaum Associates, Mahwah, NJ, third edition. David Marr. 1982. Vision. A Computational Investigation into the Human Representation and Processing of Visual Information. W.H. Freeman and Company. Letitia R. Naigles. 1990. Children use syntax to learn verb meanings. The Journal Child Language, 17:357–374. Colin Phillips. 2010. Some arguments and nonarguments for reductionist accounts of syntactic phenomena. Language and Cognitive Processes, 28:156–187. Martin Pickering and Guy Barry. 1991. Sentence processing without empty categories. Language and Cognitive Processes, 6(3):229–259. Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257–287. John R. Ross. 1967. Constraints on Variables in Syntax. Ph.D. thesis, Massachusetts Institute of Technology. William Schuler. 2011. Effects of filler-gap dependencies on working memory requirements for parsing. In Proceedings of COGSCI, pages 501– 506, Austin, TX. Cognitive Science Society. Amanda Seidl, George Hollich, and Peter W. Jusczyk. 2003. Early understanding of subject and object wh-questions. Infancy, 4(3):423–436. Rushen Shi, Janet F. Werker, and James L. Morgan. 1999. Newborn infants’ sensitivity to perceptual cues to lexical and grammatical words. Cognition, 72(2):B11–B21. Katherine Thatcher, Holly Branigan, Janet McLean, and Antonella Sorace. 2008. Children’s early acquisition of the passive: Evidence from syntactic priming. In Proceedings of the Child Language Seminar 2007, pages 195–205, University of Reading. Ivan Titov and Alexandre Klementiev. 2012. Crosslingual induction of semantic roles. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL2011). Sandra R. Waxman and Amy E. Booth. 2001. Seeing pink elephants: Fourteen-month-olds’ interpretations of novel nouns and adjectives. Cognitive Psychology, 43:217–242. Sylvia Yuan, Cynthia Fisher, and Jesse Snedeker. 2012. Counting the nouns: Simple structural cues to verb meaning. Child Development, 83(4):1382–1399. 1093
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1094–1103, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Nonparametric Learning of Phonological Constraints in Optimality Theory Gabriel Doyle Department of Linguistics UC San Diego La Jolla, CA, USA 92093 [email protected] Klinton Bicknell Department of Linguistics Northwestern University Evanston, IL, USA 60208 [email protected] Roger Levy Department of Linguistics UC San Diego La Jolla, CA, USA 92093 [email protected] Abstract We present a method to jointly learn features and weights directly from distributional data in a log-linear framework. Specifically, we propose a non-parametric Bayesian model for learning phonological markedness constraints directly from the distribution of input-output mappings in an Optimality Theory (OT) setting. The model uses an Indian Buffet Process prior to learn the feature values used in the loglinear method, and is the first algorithm for learning phonological constraints without presupposing constraint structure. The model learns a system of constraints that explains observed data as well as the phonologically-grounded constraints of a standard analysis, with a violation structure corresponding to the standard constraints. These results suggest an alternative data-driven source for constraints instead of a fully innate constraint set. 1 Introduction Many aspects of human cognition involve the interaction of constraints that push a decision-maker toward different options, whether in something so trivial as choosing a movie or so important as a fight-or-flight response. These constraint-driven decisions can be modeled with a log-linear system. In these models, a set of constraints is weighted and their violations are used to determine a probability distribution over outcomes. But where do these constraints come from? We consider this question by examining the dominant framework in modern phonology, Optimality Theory (Prince and Smolensky, 1993, OT), implemented in a log-linear framework, MaxEnt OT (Goldwater and Johnson, 2003), with output forms’ probabilities based on a weighted sum of constraint violations. OT analyses generally assume that the constraints are innate and universal, both to obviate the problem of learning constraints’ identities and to limit the set of possible languages. We propose a new approach: to learn constraints with limited innate phonological knowledge by identifying sets of constraint violations that explain the observed distributional data, instead of selecting constraints from an innate set of constraint definitions. Because the constraints are identified as sets of violations, this also permits constraints specific to a given language to be learned. This method, which we call IBPOT, uses an Indian Buffet Process (IBP) prior to define the space of possible constraint violation matrices, and uses Bayesian reasoning to identify constraint matrices likely to have generated the observed data. In identifying constraints solely by their extensional violation profiles, this method does not directly identify the intensional definitions of the identified constraints, but to the extent that the resulting violation profiles are phonologically interpretable, we may conclude that the data themselves guide constraint identification. We test IBPOT on tongue-root vowel harmony in Wolof, a West African language. 
The set of constraints learned by the model satisfy two major goals: they explain the data as well as the standard phonological analysis, and their violation structures correspond to the standard constraints. This suggests an alternative data-driven genesis for constraints, rather than the traditional assumption of fully innate constraints. 2 Phonology and Optimality Theory 2.1 OT structure Optimality Theory has been used for constraintbased analysis of many areas of language, but we focus on its most successful application: phonology. We consider an OT analysis of the mappings 1094 between underlying forms and their phonological manifestations – i.e., mappings between forms in the mental lexicon and the actual vocalized forms of the words.1 Stated generally, an OT system takes some input, generates a set of candidate outputs, determines what constraints each output violates, and then selects a candidate output with a relatively unobjectionable violation profile. To do this, an OT system contains four major components: a generator GEN, which generates candidate output forms for the input; a set of constraints CON, which penalize candidates; a evaluation method EVAL, which selects an winning candidate; and H, a language-particular weighting of constraints that EVAL uses to determine the winning candidate. Previous OT work has focused on identifying the appropriate formulation of EVAL and the values and acquisition of H, while taking GEN and CON as given. Here, we expand the learning task by proposing an acquisition method for CON. To learn CON, we propose a data-driven markedness constraint learning system that avoids both innateness and tractability issues. Unlike previous OT learning methods, which assume known constraint definitions and only learn the relative strength of these constraints, the IBPOT learns constraint violation profiles and weights for them simultaneously. The constraints are derived from sets of violations that effectively explain the observed data, rather than being selected from a preexisting set of possible constraints. 2.2 OT as a weighted-constraint method Although all OT systems share the same core structure, different choices of EVAL lead to different behaviors. In IBPOT, we use the loglinear EVAL developed by Goldwater and Johnson (2003) in their MaxEnt OT system. MEOT extends traditional OT to account for variation (cases in which multiple candidates can be the winner), as well as gradient/probabilistic productions (Anttila, 1997) and other constraint interactions (e.g., cumulativity) that traditional OT cannot handle (Keller, 2000). MEOT also is motivated by the general MaxEnt framework, whereas most other OT formulations are ad hoc constructions specific to phonology. In MEOT, each constraint Ci is associated with 1Although phonology is usually framed in terms of sound, sign languages also have components that serve equivalent roles in the physical realization of signs (Stokoe, 1960). a weight wi < 0. (Weights are always negative in OT; a constraint violation can never make a candidate more likely to win.) For a given inputcandidate pair (x, y), fi(y, x) is the number of violations of constraint Ci by the pair. As a maximum entropy model, the probability of y given x is proportional to the exponential of the weighted sum of violations, P i wifi(y, x). 
If Y(x) is the set of all output candidates for the input x, then the probability of y as the winning output is:

p(y|x) = \frac{\exp\left(\sum_i w_i f_i(y, x)\right)}{\sum_{z \in Y(x)} \exp\left(\sum_i w_i f_i(z, x)\right)}    (1)

This formulation represents a probabilistic extension of the traditional formulation of OT (Prince and Smolensky, 1993). Traditionally, constraints form a strict hierarchy, where a single violation of a high-ranked constraint is worse than any number of violations of lower-ranked constraints. Traditional OT is also deterministic, with the optimal candidate always selected. In MEOT, the constraint weights define hierarchies of varying strictness, and some probability is assigned to all candidates. If constraints' weights are close together, multiple violations of lower-weighted constraints can reduce a candidate's probability below that of a competitor with a single high-weight violation. As the distance between weights in MEOT increases, the probability of a suboptimal candidate being chosen approaches zero; thus the traditional formulation is a limit case of MEOT.

2.3 OT in practice

Figure 1 shows tableaux, a visualization for OT, applied in Wolof (Archangeli and Pulleyblank, 1994; Boersma, 1999). We are interested in four Wolof constraints that combine to induce vowel harmony: *I, PARSE[rtr], HARMONY, and PARSE[atr]. The meaning of these constraints will be discussed in Sect. 4.1; for now, we will only consider their violation profiles. Each column represents a constraint, with weights decreasing left-to-right. Each tableau looks at a single input form, noted in the top-left cell: ete, EtE, Ite, or itE. Each row is a candidate output form. A black cell indicates that the candidate, or input-candidate pair, violates the constraint in that column.2 A white cell indicates no violation. Grey stripes are overlaid on cells whose value will have a negligible impact on the distribution due to the values of higher-ranked constraints.

2 In general, a constraint can be violated multiple times by a given candidate, but we will be using binary constraints (violated or not) in this work. See Sect. 5.2 for further discussion.

Figure 1: Tableaux for the Wolof input forms ete, EtE, Ite, and itE. Black indicates violation, white no violation. Scores are calculated for a MaxEnt OT system with constraint weights of -64, -32, -16, and -8, approximating a traditional hierarchical OT design. Values of grey-striped cells have negligible effects on the distribution (see Sect. 4.3). Candidate scores — input ete: ete 0, ɛte -24, etɛ -24, ɛtɛ -8; input ɪte: ite -32, ɪte -80, itɛ -56, ɪtɛ -72; input ɛtɛ: ete -32, ɛte -48, etɛ -48, ɛtɛ 0; input itɛ: ite -32, ɪte -120, itɛ -16, ɪtɛ -72.

Constraints fall into two categories, faithfulness and markedness, which differ in what information they use to assign violations. Faithfulness constraints penalize mismatches between the input and output, while markedness constraints consider only the output. Faithfulness violations include phoneme additions or deletions between the input and output; markedness violations include penalizing specific phonemes in the output form, regardless of whether the phoneme is present in the input. In MaxEnt OT, each constraint has a weight, and the candidates' scores are the sums of the weights of violated constraints. In the ete tableau at top left, output ete has no violations, and therefore a score of zero.
Outputs Ete and etE violate both HARMONY (weight 16) and PARSE[atr] (weight 8), so their scores are 24. Output EtE violates PARSE[atr], and has score 8. Thus the logprobability of output EtE is 1/8 that of ete, and the log-probability of disharmonious Ete and etE are each 1/24 that of ete. As the ratio between scores increases, the log-probability ratios can become arbitrarily close to zero, approximating the deterministic situation of traditional OT. 2.4 Learning Constraints Choosing a winning candidate presumes that a set of constraints CON is available, but where do these constraints come from? The standard assumption within OT is that CON is innate and universal. But in the absence of direct evidence of innate constraints, we should prefer a method that can derive the constraints from cognitivelygeneral learning over one that assumes they are pre-specified. Learning appropriate model features has been an important idea in the development of constraint-based models (Della Pietra et al., 1997). The innateness assumption can induce tractability issues as well. The strictest formulation of innateness posits that virtually all constraints are shared across all languages, even when there is no evidence for the constraint in a particular language (Tesar and Smolensky, 2000). Strict universality is undermined by the extremely large set of constraints it must weight, as well as the possible existence of language-particular constraints (Smith, 2004). A looser version of universality supposes that constraints are built compositionally from a set of constraint templates or primitives or phonological features (Hayes, 1999; Smith, 2004; Idsardi, 2006; Riggle, 2009). This version allows language-particular constraints, but it comes with a computational cost, as the learner must be able to generate and evaluate possible constraints while learning the language’s phonology. Even with relatively simple constraint templates, such as the phonological constraint learner of Hayes and Wilson (2008), the number of possible constraints expands exponentially. Depending on the specific formulation of the constraints, the constraint identification problem may even be NP-hard (Idsardi, 2006; Heinz et al., 2009). Our approach of casting the learning problem as one of identifying violation profiles is an attempt to determine the amount that can be learned about the active constraints in a paradigm without hypothesizing intensional constraint definitions. The violation profile informa1096 tion used by our model could then be used to narrow the search space for intensional constraints, either by performing post-hoc analysis of the constraints identified by our model or by combining intensional constraint search into the learning process. We discuss each of these possibilities in Section 5.2. Innateness is less of a concern for faithfulness than markedness constraints. Faithfulness violations are determined by the changes between an input form and a candidate, yielding an independent motivation for a universal set of faithfulness constraints (McCarthy, 2008). Some markedness constraints can also be motivated in a universal manner (Hayes, 1999), but many markedness constraints lack such grounding.3 As such, it is unclear where a universal set of markedness constraints would come from. 
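Returning to the worked example above, the sketch below recomputes the ete tableau of Figure 1 under Eq. (1), using the caption's weights; the violation profiles are transcribed by hand from the figure and the function names are ours, so this is an illustration rather than the authors' code.

```python
import math

# Weights from the Figure 1 caption (negative, as required in MaxEnt OT).
WEIGHTS = {"*I": -64.0, "Parse(rtr)": -32.0, "Harmony": -16.0, "Parse(atr)": -8.0}

# Hand-transcribed binary violation profiles for the candidates of input ete
# (following the tableau at the top left of Figure 1); 1 = violated, 0 = not violated.
ETE_CANDIDATES = {
    "ete": {"*I": 0, "Parse(rtr)": 0, "Harmony": 0, "Parse(atr)": 0},
    "Ete": {"*I": 0, "Parse(rtr)": 0, "Harmony": 1, "Parse(atr)": 1},
    "etE": {"*I": 0, "Parse(rtr)": 0, "Harmony": 1, "Parse(atr)": 1},
    "EtE": {"*I": 0, "Parse(rtr)": 0, "Harmony": 0, "Parse(atr)": 1},
}


def score(violations, weights=WEIGHTS):
    """Weighted sum of constraint violations: the exponent in the numerator of Eq. (1)."""
    return sum(weights[c] * v for c, v in violations.items())


def candidate_distribution(candidates, weights=WEIGHTS):
    """MaxEnt OT distribution over the candidate set for one input, following Eq. (1)."""
    scores = {y: score(f, weights) for y, f in candidates.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {y: math.exp(s) / z for y, s in scores.items()}


if __name__ == "__main__":
    for y, p in candidate_distribution(ETE_CANDIDATES).items():
        print(f"{y}: score {score(ETE_CANDIDATES[y]):6.1f}  p = {p:.3e}")
    # ete receives nearly all of the probability mass; EtE is exp(-8) times less
    # likely, and the disharmonious Ete and etE are each exp(-24) times less likely.
```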
3 The IBPOT Model 3.1 Structure The IBPOT model defines a generative process for mappings between input and output forms based on three latent variables: the constraint violation matrices F (faithfulness) and M (markedness), and the weight vector w. The cells of the violation matrices correspond to the number of violations of a constraint by a given input-output mapping. Fijk is the number of violations of faithfulness constraint Fk by input-output pair type (xi, yj); Mjl is the number of violations of markedness constraint M·l by output candidate yj. Note that M is shared across inputs, as Mjl has the same value for all input-output pairs with output yj. The weight vector w provides weight for both F and M. Probabilities of output forms are given by a log-linear function: p(yj|xi) = exp (P k wkFijk + P l wlMjl) P yz∈Y(xi) exp (P k wkFizk + P l wlMzl) (2) Note that this is the same structure as Eq. 1 but with faithfulness and markedness constraints listed separately. As discussed in Sect. 2.4, we assume that F is known as part of the output of GEN (Riggle, 2009). The goal of the IBPOT model is to 3McCarthy (2008, §4.8) gives examples of “ad hoc” intersegmental constraints. Even well-known constraint types, such as generalized alignment, can have disputed structures (Hyde, 2012). learn the markedness matrix M and weights w for both the markedness and faithfulness constraints. As for M, we need a non-parametric prior, as there is no inherent limit to the number of markedness constraints a language will use. We use the Indian Buffet Process (Griffiths and Ghahramani, 2005), which defines a proper probability distribution over binary feature matrices with an unbounded number of columns. The IBP can be thought of as representing the set of dishes that diners eat at an infinite buffet table. Each diner (i.e., output form) first draws dishes (i.e., constraint violations) with probability proportional to the number of previous diners who drew it: p(Mjl = 1|{Mzl}z<j) = nl/j. After choosing from the previously taken dishes, the diner can try additional dishes that no previous diner has had. The number of new dishes that the j-th customer draws follows a Poisson(α/j) distribution. The complete specification of the model is then: M ∼IBP(α); Y(xi) = Gen(xi) w ∼−Γ(1, 1); y|xi ∼LogLin(M, F, w, Y(xi)) 3.2 Inference To perform inference in this model, we adopt a common Markov chain Monte Carlo estimation procedure for IBPs (G¨or¨ur et al., 2006; Navarro and Griffiths, 2007). We alternate approximate Gibbs sampling over the constraint matrix M, using the IBP prior, with a Metropolis-Hastings method to sample constraint weights w. We initialize the model with a randomly-drawn markedness violation matrix M and weight vector w. To learn, we iterate through the output forms yj; for each, we split M−j· into “represented” constraints (those that are violated by at least one output form other than yj) and “non-represented” constraints (those violated only by yj). For each represented constraint M·l, we re-sample the value for the cell Mjl. All non-represented constraints are removed, and we propose new constraints, violated only by yj, to replace them. After each iteration through M, we use Metropolis-Hastings to update the weight vector w. Represented constraint sampling We begin by resampling Mjl for all represented constraints M·l, conditioned on the rest of the violations (M−(jl), F) and the weights w. This is the sampling counterpart of drawing existing features in the IBP generative process. 
By Bayes’ Rule, the 1097 posterior probability of a violation is proportional to product of the likelihood p(Y |Mjl = 1, M−jl, F, w) from Eq. 2 and the IBP prior probability p(Mjl = 1|M−jl) = n−jl/n, where n−jl is the number of outputs other than yj that violate constraint M·l. Non-represented constraint sampling After sampling the represented constraints for yj, we consider the addition of new constraints that are violated only by yj. This is the sampling counterpart to the Poisson draw for new features in the IBP generative process. Ideally, this would draw new constraints from the infinite feature matrix; however, this requires marginalizing the likelihood over possible weights, and we lack an appropriate conjugate prior for doing so. We approximate the infinite matrix with a truncated Bernoulli draw over unrepresented constraints (G¨or¨ur et al., 2006). We consider in each sample at most K∗ new constraints, with weights based on the auxiliary vector w∗. This approximation retains the unbounded feature set of the IBP, as repeated sampling can add more and more constraints without limit. The auxiliary vector w∗contains the weights of all the constraints that have been removed in the previous step. If the number of constraints removed is less than K∗, w∗is filled out with draws from the prior distribution over weights. We then consider adding any subset of these new constraints to M, each of which would be violated only by yj. Let M∗represent a (possibly empty) set of constraints paired with a subset of w∗. The posterior probability of drawing M∗from the truncated Bernoulli distribution is the product of the prior probability of M∗ α K∗ NY + α K∗  and the likelihood p(Y |M∗, w∗, M, w, F), including the new constraints M∗. Weight sampling After sampling through all candidates, we use Metropolis-Hastings to estimate new weights for both constraint matrices. Our proposal distribution is Gamma(wk2/η, η/wk), with mean wk and mode wk − η wk (for wk > 1). Unlike Gibbs sampling on the constraints, which occurs only on markedness constraints, weights are sampled for both markedness and faithfulness features. 4 Experiment 4.1 Wolof vowel harmony We test the model by learning the markedness constraints driving Wolof vowel harmony (Archangeli and Pulleyblank, 1994). Vowel harmony in general refers to a phonological phenomenon wherein the vowels of a word share certain features in the output form even if they do not share them in the input. In the case of Wolof, harmony encourages forms that have consistent tongue root positions. The Wolof vowel system has two relevant features, tongue root position and vowel height. The tongue root can either be advanced (ATR) or retracted (RTR), and the body of the tongue can be in the high, middle, or low part of the mouth. These features define six vowels: high mid low ATR i e @ RTR I E a We test IBPOT on the harmony system provided in the Praat program (Boersma, 1999), previously used as a test case by Goldwater and Johnson (2003) for MEOT learning with known constraints. This system has four constraints:4 • Markedness: – *I: do not have I (high RTR vowel) – HARMONY: do not have RTR and ATR vowels in the same word • Faithfulness: – PARSE[rtr]: do not change RTR input to ATR output – PARSE[atr]: do not change ATR input to RTR output These constraints define the phonological standard that we will compare IBPOT to, with a ranking from strongest to weakest of *I >> PARSE[rtr] >> HARMONY >> PARSE[atr]. 
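Before examining how this ranking plays out in the data, here is a minimal sketch of the Metropolis-Hastings weight update described in Section 3.2. The log_likelihood argument stands in for the Eq. (2) term, the Gamma proposal is applied to weight magnitudes (our reading, since Gamma draws are positive and OT weights negative), and the Exponential(1) prior on magnitudes corresponds to w ∼ −Γ(1, 1); none of the names are taken from the authors' implementation.

```python
import math
import random


def log_gamma_pdf(x, shape, scale):
    """Log density of a Gamma(shape, scale) distribution at x > 0."""
    return ((shape - 1.0) * math.log(x) - x / scale
            - math.lgamma(shape) - shape * math.log(scale))


def mh_weight_sweep(weights, log_likelihood, eta=0.5, rng=random):
    """One Metropolis-Hastings sweep over the (negative) constraint weights.

    log_likelihood(weights) stands in for log p(Y | M, F, w) from Eq. (2).
    The Gamma(w^2/eta, eta/w) proposal from the text is applied to the weight
    magnitudes (an assumption), and the prior on each magnitude is Exponential(1).
    """
    weights = list(weights)
    for k, w in enumerate(weights):
        m = -w                                           # current magnitude, > 0
        m_new = rng.gammavariate(m * m / eta, eta / m)   # proposal centred on m
        if m_new <= 0.0:
            continue                                     # guard against degenerate draws
        proposed = weights[:k] + [-m_new] + weights[k + 1:]
        log_accept = (log_likelihood(proposed) - log_likelihood(weights)
                      + (m - m_new)                                         # Exponential(1) prior on magnitudes
                      + log_gamma_pdf(m, m_new * m_new / eta, eta / m_new)  # reverse proposal density
                      - log_gamma_pdf(m_new, m * m / eta, eta / m))         # forward proposal density
        if math.log(rng.random()) < log_accept:
            weights[k] = -m_new
    return weights


def toy_log_likelihood(w):
    """Stand-in likelihood peaking at w = (-10, -4); the real model would use Eq. (2)."""
    return -((w[0] + 10.0) ** 2) - ((w[1] + 4.0) ** 2)


if __name__ == "__main__":
    random.seed(0)
    w = [-3.0, -3.0]
    for _ in range(2000):
        w = mh_weight_sweep(w, toy_log_likelihood)
    print([round(x, 2) for x in w])  # settles around -9.5 and -3.5: the likelihood peaks, shifted by the prior
```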
Under this ranking, Wolof harmony is achieved by changing a disharmonious ATR to an RTR, unless this creates an I vowel. We see this in Figure 1, where three of the four winners are harmonic, but with input itE, harmony would require violating one of the two higher-ranked constraints. As in previous MEOT work, all Wolof candidates are faithful 4The version in Praat includes a fifth constraint, but its value never affects the choice of output in our data and is omitted in this analysis. 1098 with respect to vowel height, either because height changes are not considered by GEN, or because of a high-ranked faithfulness constraint blocking height changes.5 The Wolof constraints provide an interesting testing ground for the model, because it is a small set of constraints to be learned, but contains the HARMONY constraint, which can be violated by non-adjacent segments. Non-adjacent constraints are difficult for string-based approaches because of the exponential number of possible relationships across non-adjacent segments. However, the Wolof results show that by learning violations directly, IBPOT does not encounter problems with non-adjacent constraints. The Wolof data has 36 input forms, each of the form V1tV2, where V1 and V2 are vowels that agree in height. Each input form has four candidate outputs, with one output always winning. The outputs appear for multiple inputs, as shown in Figure 1. The candidate outputs are the four combinations of tongue-roots for the given vowel heights; the inputs and candidates are known to the learner. We generate simulated data by observing 1000 instances of the winning output for each input.6 The model must learn the markedness constraints *I and HARMONY, as well as the weights for all four constraints. We make a small modification to the constraints for the test data: all constraints are limited to binary values. For constraints that can be violated multiple times by an output (e.g., *I twice by ItI), we use only a single violation. This is necessary in the current model definition because the IBP produces a prior over binary matrices. We generate the simulated data using only single violations of each constraint by each output form. Overcoming the binarity restriction is discussed in Sect. 5.2. 4.2 Experiment Design We run the model for 10000 iterations, using deterministic annealing through the first 2500 it5In the present experiment, we assume that GEN does not generate candidates with unfaithful vowel heights. If unfaithful vowel heights were allowed by GEN, these unfaithful candidates would incur a violation approximately as strong as *I, as neither unfaithful-height candidates nor I candidates are attested in the Wolof data. 6Since data, matrix, and weight likelihoods all shape the learned constraints, there must be enough data for the model to avoid settling for a simple matrix that poorly explains the data. This represents a similar training set size to previous work (Goldwater and Johnson, 2003; Boersma and Hayes, 2001). erations. The model is initialized with a random markedness matrix drawn from the IBP and weights from the exponential prior. We ran versions of the model with parameter settings between 0.01 and 1 for α, 0.05 and 0.5 for η, and 2 and 5 for K∗. All these produced quantitatively similar results; we report values for α = 1, η = 0.5, and K∗= 5, which provides the least bias toward small constraint sets. To establish performance for the phonological standard, we use the IBPOT learner to find constraint weights but do not update M. 
The resultant learner is essentially MaxEnt OT with the weights estimated through Metropolis sampling instead of gradient ascent. This is done so that the IBPOT weights and phonological standard weights are learned by the same process and can be compared. We use the same parameters for this baseline as for the IBPOT tests. The results in this section are based on nine runs each of IBPOT and MEOT; ten MEOT runs were performed but one failed to converge and was removed from analysis. 4.3 Results A successful set of learned constraints will satisfy two criteria: achieving good data likelihood (no worse than the phonological-standard constraints) and acquiring constraint violation profiles that are phonologically interpretable. We find that both of these criteria are met by IBPOT on Wolof. Likelihood comparison First, we calculate the joint probability of the data and model given the priors, p(Y, M, w|F, α), which is proportional to the product of three terms: the data likelihood p(Y |M, F, w), the markedness matrix probability p(M|α), and the weight probability p(w). We present both the mean and MAP values for these over the final 1000 iterations of each run. Results are shown in Table 1. All eight differences are significant according to t-tests over the nine runs. In all cases but mean M, the IBPOT method has a better log-probability. The most important differences are those in the data probabilities, as the matrix and weight probabilities are reflective primarily of the choice of prior. By both measures, the IBPOT constraints explain the observed data better than the phonologically standard constraints. Interestingly, the mean M probability is lower for IBPOT than for the phonological standard. Though the phonologically standard constraints 1099 MAP Mean IBPOT PS IBPOT PS Data -1.52 -3.94 -5.48 -9.23 M -51.7 -53.3 -54.7 -53.3 w -44.2 -71.1 -50.6 -78.1 Joint -97.4 -128.4 -110.6 -140.6 Table 1: Data, markedness matrix, weight vector, and joint log-probabilities for the IBPOT and the phonological standard constraints. MAP and mean estimates over the final 1000 iterations for each run. All IBPOT/PS differences are significant (p < .005 for MAP M; p < .001 for others). exist independently of the IBP prior, they fit the prior better than the average IBPOT constraints do. This shows that the IBP’s prior preferences can be overcome in order to have constraints that better explain the data. Constraint comparison Our second criterion is the acquisition of meaningful constraints, that is, ones whose violation profiles have phonologically-grounded explanations. IBPOT learns the same number of markedness constraints as the phonological standard (two); over the final 1000 iterations of the model runs, 99.2% of the iterations had two markedness constraints, and the rest had three. Turning to the form of these constraints, Figure 2 shows violation profiles from the last iteration of a representative IBPOT run.7 Because vowel heights must be faithful between input and output, the Wolof data is divided into nine separate paradigms, each containing the four candidates (ATR/RTR × ATR/RTR) for the vowel heights in the input. The violations on a given output form only affect probabilities within its paradigm. As a result, learned constraints are consistent within paradigms, but across paradigms, the same constraint may serve different purposes. 
For instance, the strongest learned markedness constraint, shown as M1 in Figure 2, has the same violations as the top-ranked constraint that actively distinguishes between candidates in each paradigm. For the five paradigms with at least one high vowel (the top row and left column), M1 has the same violations as *I, as *I penalizes some but not all of the candidates. In the 7Specifically, from the run with the median joint posterior. other four paradigms, *I penalizes none of the candidates, and the IBPOT learner has no reason to learn it. Instead, it learns that M1 has the same violations as HARMONY, which is the highest-weighted constraint that distinguishes between candidates in these paradigms. Thus in the high-vowel paradigms, M1 serves as *I, while in the low/mid-vowel paradigms, it serves as HARMONY. The lower-weighted M2 is defined noisily, as the higher-ranked M1 makes some values of M2 inconsequential. Consider the top-left paradigm of Figure 2, the high-high input, in which only one candidate does not violate M1 (*I). Because M1 has a much higher weight than M2, a violation of M2 has a negligible effect on a candidate’s probability.8 In such cells, the constraint’s value is influenced more by the prior than by the data. These inconsequential cells are overlaid with grey stripes in Figure 2. The meaning of M2, then, depends only on the consequential cells. In the high-vowel paradigms, M2 matches HARMONY, and the learned and standard constraints agree on all consequential violations, despite being essentially at chance on the indistinguishable violations (58%). On the non-high paradigms, the meaning of M2 is unclear, as HARMONY is handled by M1 and *I is unviolated. In all four paradigms, the model learns that the RTRRTR candidate violates M2 and the ATR-ATR candidate does not; this appears to be the model’s attempt to reinforce a pattern in the lowest-ranked faithfulness constraint (PARSE[atr]), which the ATR-ATR candidate never violates. Thus, while the IBPOT constraints are not identical to the phonologically standard ones, they reflect a version of the standard constraints that is consistent with the IBPOT framework.9 In paradigms where each markedness constraint distinguishes candidates, the learned constraints match the standard constraints. In paradigms where only one constraint distinguishes candidates, the top learned constraint matches it and the second learned constraint exhibits a pattern consistent with a low-ranked faithfulness constraint. 8Given the learned weights in Fig. 2, if the losing candidate violates M1, its probability changes from 10−12 when the preferred candidate does not violate M2 to 10−8 when it does. 9In fact, it appears this constraint organization is favored by IBPOT as it allows for lower weights, hence the large difference in w log-probability in Table 1. 1100 *ɪ Harmony M1 M2 *ɪ Harmony M1 M2 *ɪ Harmony M1 M2 iti eti əti ɪti ɛti ati itɪ etɪ ətɪ ɪtɪ ɛtɪ atɪ ite ete əte ɪte ɛte ate itɛ etɛ ətɛ ɪtɛ ɛtɛ atɛ itə etə ətə ɪtə ɛtə atə ita eta əta ɪta ɛta ata Learned Phono. Std. hi hi hi mid hi lo Phono. Std. Learned mid lo mid mid mid hi Phono. Std. Learned lo hi lo mid lo lo Figure 2: Phonologically standard (*I, HARMONY) and learned (M1,M2) constraint violation profiles for the output forms. Learned weights for the standard constraints are -32.8 and -15.3; for M1 and M2, they are -26.5 and -8.4. Black indicates violation, white no violation. Grey stripes indicate cells whose values have negligible effects on the probability distribution. 
5 Discussion and Future Work 5.1 Relation to phonotactic learning Our primary finding from IBPOT is that it is possible to identify constraints that are both effective at explaining the data and representative of theorized phonologically-grounded constraints, given only input-output mappings and faithfulness violations. Furthermore, these constraints are successfully acquired without any knowledge of the phonological structure of the data beyond the faithfulness violation profiles. The model’s ability to infer constraint violation profiles without theoretical constraint structure provides an alternative solution to the problems of the traditionally innate and universal OT constraint set. As it jointly learns constraints and weights, the IBPOT model calls to mind Hayes and Wilson’s (2008) joint phonotactic learner. Their learner also jointly learns weights and constraints, but directly selects its constraints from a compositional grammar of constraint definitions. This limits their learner in practice by the rapid explosion in the number of constraints as the maximum constraint definition size grows. By directly learning violation profiles, the IBPOT model avoids this explosion, and the violation profiles can be automatically parsed to identify the constraint definitions that are consistent with the learned profile. The inference method of the two models is different as well; the phonotactic learner selects constraints greedily, whereas the sampling on M in IBPOT asymptotically approaches the posterior. The two learners also address related but different phonological problems. The phonotactic learner considers phonotactic problems, in which only output matters. The constraints learned by Hayes and Wilson’s learner are essentially OT markedness constraints, but their learner does not have to account for varied inputs or effects of faithfulness constraints. 5.2 Extending the learning model IBPOT, as proposed here, learns constraints based on binary violation profiles, defined extensionally. A complete model of constraint acquisition should provide intensional definitions that are phonologically grounded and cover potentially non-binary constraints. We discuss how to extend the model toward these goals. IBPOT currently learns extensional constraints, defined by which candidates do or do not violate the constraint. Intensional definitions are needed to extend constraints to unseen forms. Post hoc violation profile analysis, as in Sect. 4.3, provides a first step toward this goal. Such analysis can be integrated into the learning process using the Rational Rules model (Goodman et al., 2008) to identify likely constraint definitions compositionally. Alternately, phonological knowledge could be integrated into a joint constraint learning process in the form of a naturalness bias on the constraint weights or a phonologically-motivated replacement for the IBP prior. The results presented here use binary constraints, where each candidate violates each constraint only once, a result of the IBP’s restriction to binary matrices. Non-binarity can be handled by using the binary matrix M to indicate whether a candidate violates a constraint, with a second 1101 distribution determining the number of violations. Alternately, a binary matrix can directly capture non-binary constraints; Frank and Satta (1998) converted existing non-binary constraints into a binary OT system by representing non-binary constraints as a set of equally-weighted overlapping constraints, each accounting for one violation. 
The non-binary harmony constraint, for instance, becomes a set {*(at least one disharmony), *(at least two disharmonies), etc.}. Lastly, the Wolof vowel harmony problem provides a test case with overlaps in the candidate sets for different inputs. This candidate overlap helps the model find appropriate constraint structures. Analyzing other phenomena may require the identification of appropriate abstractions to find this same structural overlap. English regular plurals, for instance, fall into broad categories depending on the features of the stem-final phoneme. IBPOT learning in such settings may require learning an appropriate abstraction as well. 6 Conclusion A central assumption of Optimality Theory has been the existence of a fixed inventory of universal markedness constraints innately available to the learner, an assumption by arguments regarding the computational complexity of constraint identification. However, our results show for the first time that nonparametric, data-driven learning can identify sparse constraint inventories that both accurately predict the data and are phonologically meaningful, providing a serious alternative to the strong nativist view of the OT constraint inventory. Acknowledgments We wish to thank Eric Bakovi´c, Emily Morgan, Mark Mysl´ın, the UCSD Computational Psycholinguistics Lab, the Phon Company, and the reviewers for their discussions and feedback on this work. This research was supported by NSF award IIS-0830535 and an Alfred P. Sloan Foundation Research Fellowship to RL. References Arto Anttila. 1997. Variation in Finnish phonology and morphology. Ph.D. thesis, Stanford U. Diana Archangeli and Douglas Pulleyblank. 1994. Grounded phonology. MIT Press. Paul Boersma. 1999. Empirical tests of the Gradual Learning Algorithm. Linguistic Inquiry, 32:45–86. Paul Boersma and Bruce Hayes. 2001. Optimalitytheoretic learning in the Praat program. In Proceedings of the Institute of Phonetic Sciences of the University of Amsterdam. Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19:380–393. Robert Frank and Giorgio Satta. 1998. Optimality theory and the generative complexity of constraint violability. Computational Linguistics, 24:307–315. Sharon Goldwater and Mark Johnson. 2003. Learning OT constraint rankings using a Maximum Entropy model. In Proceedings of the Workshop on Variation within Optimality Theory. Noah Goodman, Joshua Tenebaum, Jacob Feldman, and Tom Griffiths. 2008. A rational analysis of rulebased concept learning. Cognitive Science, 32:108– 154. Dilan G¨or¨ur, Frank J¨akel, and Carl Rasmussen. 2006. A choice model with infinitely many latent features. In Proceedings of the 23rd International Conference on Machine Learning. Thomas Griffiths and Zoubin Ghahramani. 2005. Infinite latent feature models and the Indian buffet process. Technical Report 2005-001, Gatsby Computational Neuroscience Unit. Bruce Hayes and Colin Wilson. 2008. A maximum entropy model of phonotactics and phonotactic learning. Linguistic Inquiry, 39:379–440. Bruce Hayes. 1999. Phonetically driven phonology: the role of optimality theory and inductive grounding. In Darnell et al, editor, Formalism and Functionalism in Linguistics, vol. 1. Benjamins. Jeffrey Heinz, Gregory Kobele, and Jason Riggle. 2009. Evaluating the complexity of Optimality Theory. Linguistic Inquiry. Brett Hyde. 2012. Alignment constraints. 
Natural Language and Linguistic Theory, 30:789–836. William Idsardi. 2006. A simple proof that Optimality Theory is computationally intractable. Linguistic Inquiry, 37:271–275. Frank Keller. 2000. Gradience in grammar: Experimental and computational aspects of degrees of grammaticality. Ph.D. thesis, U. of Edinburgh. John McCarthy. 2008. Doing Optimality Theory. Blackwell. Daniel Navarro and Tom Griffiths. 2007. A nonparametric Bayesian method for inferring features from similarity judgments. In Advances in Neural Information Processing Systems 19. 1102 Alan Prince and Paul Smolensky. 1993. Optimality theory: Constraint interaction in generative grammar. Technical report, Rutgers Center for Cognitive Science. Jason Riggle. 2009. Generating contenders. Rutgers Optimality Archive, 1044. Jennifer Smith. 2004. Making constraints compositional: toward a compositional model of Con. Lingua, 114:1433–1464. William Stokoe. 1960. Sign Language Structure. Linstok Press. Bruce Tesar and Paul Smolensky. 2000. Learnability in Optimality Theory. MIT Press. 1103
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1104–1112, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Active Learning with Efficient Feature Weighting Methods for Improving Data Quality and Classification Accuracy Justin Martineau1, Lu Chen2∗, Doreen Cheng3, and Amit Sheth4 1,3 Samsung Research America, Silicon Valley 1,3 75 W Plumeria Dr. San Jose, CA 95134 USA 2,4 Kno.e.sis Center, Wright State University 2,4 3640 Colonel Glenn Hwy. Fairborn, OH 45435 USA 1,3 {justin.m, doreen.c}@samsung.com 2,4 {chen, amit}@knoesis.org Abstract Many machine learning datasets are noisy with a substantial number of mislabeled instances. This noise yields sub-optimal classification performance. In this paper we study a large, low quality annotated dataset, created quickly and cheaply using Amazon Mechanical Turk to crowdsource annotations. We describe computationally cheap feature weighting techniques and a novel non-linear distribution spreading algorithm that can be used to iteratively and interactively correcting mislabeled instances to significantly improve annotation quality at low cost. Eight different emotion extraction experiments on Twitter data demonstrate that our approach is just as effective as more computationally expensive techniques. Our techniques save a considerable amount of time. 1 Introduction Supervised classification algorithms require annotated data to teach the machine, by example, how to perform a specific task. There are generally two ways to collect annotations of a dataset: through a few expert annotators, or through crowdsourcing services (e.g., Amazon’s Mechanical Turk). High-quality annotations can be produced by expert annotators, but the process is usually slow and costly. The latter option is appealing since it creates a large annotated dataset at low cost. In recent years, there have been an increasing number of studies (Su et al., 2007; Kittur et al., 2008; Sheng et al., 2008; Snow et al., 2008; CallisonBurch, 2009) using crowdsourcing for data annotation. However, because annotators that are recruited this way may lack expertise and motivation, the annotations tend to be more noisy and ∗This author’s research was done during an internship with Samsung Research America. unreliable, which significantly reduces the performance of the classification model. This is a challenge faced by many real world applications – given a large, quickly and cheaply created, low quality annotated dataset, how can one improve its quality and learn an accurate classifier from it? Re-annotating the whole dataset is too expensive. To reduce the annotation effort, it is desirable to have an algorithm that selects the most likely mislabeled examples first for re-labeling. The process of selecting and re-labeling data points can be conducted with multiple rounds to iteratively improve the data quality. This is similar to the strategy of active learning. The basic idea of active learning is to learn an accurate classifier using less training data. An active learner uses a small set of labeled data to iteratively select the most informative instances from a large pool of unlabeled data for human annotators to label (Settles, 2010). In this work, we borrow the idea of active learning to interactively and iteratively correct labeling errors. The crucial step is to effectively and efficiently select the most likely mislabeled instances. 
An intuitive idea is to design algorithms that classify the data points and rank them according to the decreasing confidence scores of their labels. The data points with the highest confidence scores but conflicting preliminary labels are most likely mislabeled. The algorithm should be computationally cheap as well as accurate, so it fits well with active learning and other problems that require frequent iterations on large datasets. Specifically, we propose a novel non-linear distribution spreading algorithm, which first uses Delta IDF technique (Martineau and Finin, 2009) to weight features, and then leverages the distribution of Delta IDF scores of a feature across different classes to efficiently recognize discriminative features for the classification task in the presence of mislabeled data. The idea is that some effective fea1104 tures may be subdued due to label noise, and the proposed techniques are capable of counteracting such effect, so that the performance of classification algorithms could be less affected by the noise. With the proposed algorithm, the active learner becomes more accurate and resistant to label noise, thus the mislabeled data points can be more easily and accurately identified. We consider emotion analysis as an interesting and challenging problem domain of this study, and conduct comprehensive experiments on Twitter data. We employ Amazon’s Mechanical Turk (AMT) to label the emotions of Twitter data, and apply the proposed methods to the AMT dataset with the goals of improving the annotation quality at low cost, as well as learning accurate emotion classifiers. Extensive experiments show that, the proposed techniques are as effective as more computational expensive techniques (e.g, Support Vector Machines) but require significantly less time for training/running, which makes it well-suited for active learning. 2 Related Work Research on handling noisy dataset of mislabeled instances has focused on three major groups of techniques: (1) noise tolerance, (2) noise elimination, and (3) noise correction. Noise tolerance techniques aim to improve the learning algorithm itself to avoid over-fitting caused by mislabeled instances in the training phase, so that the constructed classifier becomes more noise-tolerant. Decision tree (Mingers, 1989; Vannoorenberghe and Denoeux, 2002) and boosting (Jiang, 2001; Kalaia and Servediob, 2005; Karmaker and Kwek, 2006) are two learning algorithms that have been investigated in many studies. Mingers (1989) explores pruning methods for identifying and removing unreliable branches from a decision tree to reduce the influence of noise. Vannoorenberghe and Denoeux (2002) propose a method based on belief decision trees to handle uncertain labels in the training set. Jiang (2001) studies some theoretical aspects of regression and classification boosting algorithms in dealing with noisy data. Kalaia and Servediob (2005) present a boosting algorithm which can achieve arbitrarily high accuracy in the presence of data noise. Karmaker and Kwek (2006) propose a modified AdaBoost algorithm – ORBoost, which minimizes the impact of outliers and becomes more tolerant to class label noise. One of the main disadvantages of noise tolerance techniques is that they are learning algorithm-dependent. In contrast, noise elimination/correction approaches are more generic and can be more easily applied to various problems. 
A large number of studies have explored noise elimination techniques (Brodley and Friedl, 1999; Verbaeten and Van Assche, 2003; Zhu et al., 2003; Muhlenbach et al., 2004; Guan et al., 2011), which identifies and removes mislabeled examples from the dataset as a pre-processing step before building classifiers. One widely used approach (Brodley and Friedl, 1999; Verbaeten and Van Assche, 2003) is to create an ensemble classifier that combines the outputs of multiple classifiers by either majority vote or consensus, and an instance is tagged as mislabeled and removed from the training set if it is classified into a different class than its training label by the ensemble classifier. The similar approach is adopted by Guan et al. (2011) and they further demonstrate that its performance can be significantly improved by utilizing unlabeled data. To deal with the noise in large or distributed datasets, Zhu et al. (2003) propose a partition-based approach, which constructs classification rules from each subset of the dataset, and then evaluates each instance using these rules. Two noise identification schemes, majority and non-objection, are used to combine the decision from each set of rules to decide whether an instance is mislabeled. Muhlenbach et al. (2004) propose a different approach, which represents the proximity between instances in a geometrical neighborhood graph, and an instance is considered suspect if in its neighborhood the proportion of examples of the same class is not significantly greater than in the dataset itself. Removing mislabeled instances has been demonstrated to be effective in increasing the classification accuracy in prior studies, but there are also some major drawbacks. For example, useful information can be removed with noise elimination, since annotation errors are likely to occur on ambiguous instances that are potentially valuable for learning algorithms. In addition, when the noise ratio is high, there may not be adequate amount of data remaining for building an accurate classifier. The proposed approach does not suffer these limitations. Instead of eliminating the mislabeled examples 1105 from training data, some researchers (Zeng and Martinez, 2001; Rebbapragada et al., 2012; Laxman et al., 2013) propose to correct labeling errors either with or without consulting human experts. Zeng and Martinez (2001) present an approach based on backpropagation neural networks to automatically correct the mislabeled data. Laxman et al. (2012) propose an algorithm which first trains individual SVM classifiers on several small, class-balanced, random subsets of the dataset, and then reclassifies each training instance using a majority vote of these individual classifiers. However, the automatic correction may introduce new noise to the dataset by mistakenly changing a correct label to a wrong one. In many scenarios, it is worth the effort and cost to fix the labeling errors by human experts, in order to obtain a high quality dataset that can be reused by the community. Rebbapragada et al. (2012) propose a solution called Active Label Correction (ALC) which iteratively presents the experts with small sets of suspected mislabeled instances at each round. Our work employs a similar framework that uses active learning for data cleaning. In Active Learning (Settles, 2010) a small set of labeled data is used to find documents that should be annotated from a large pool of unlabeled documents. Many different strategies have been used to select the best points to annotate. 
These strategies can be generally divided into two groups: (1) selecting points in poorly sampled regions, and (2) selecting points that will have the greatest impact on models that were constructed using the dataset. Active learning for data cleaning differs from traditional active learning because the data already has low quality labels. It uses the difference between the low quality label for each data point and a prediction of the label using supervised machine learning models built upon the low quality labels. Unlike the work in (Rebbapragada et al., 2012), this paper focuses on developing algorithms that can enhance the ability of active learner on identifying labeling errors, which we consider as a key challenge of this approach but ALC has not addressed. 3 An Active Learning Framework for Label Correction Let ˆD = {(x1, y1), ..., (xn, yn)} be a dataset of binary labeled instances, where the instance xi belongs to domain X, and its label yi ∈{−1, +1}. ˆD contains an unknown number of mislabeled data points. The problem is to obtain a highquality dataset D by fixing labeling errors in ˆD, and learn an accurate classifier C from it. Algorithm 1 illustrates an active learning approach to the problem. This algorithm takes the noisy dataset ˆD as input. The training set T is initialized with the data in ˆD and then updated each round with new labels generated during reannotation. Data sets Sr and S are used to maintain the instances that have been selected for reannotation in the whole process and in the current iteration, respectively. Data: noisy data ˆD Result: cleaned data D, classifier C Initialize training set T = ˆD ; Initialize re-annotated data sets Sr = ∅; S = ∅; repeat Train classifier C using T ; Use C to select a set S of m suspected mislabeled instances from T ; Experts re-annotate the instances in S −(Sr ∩S) ; Update T with the new labels in S ; Sr = Sr ∪S; S = ∅; until for I iterations; D = T ; Algorithm 1: Active Learning Approach for Label Correction In each iteration, the algorithm trains classifiers using the training data in T. In practice, we apply k-fold cross-validation. We partition T into k subsets, and each time we keep a different subset as testing data and train a classifier using the other k −1 subsets of data. This process is repeated k times so that we get a classifier for each of the k subsets. The goal is to use the classifiers to efficiently and accurately seek out the most likely mislabeled instances from T for expert annotators to examine and re-annotate. When applying a classifier to classify the instances in the corresponding data subset, we get the probability about how likely one instance belongs to a class. The top m instances with the highest probabilities belonging to some class but conflicting preliminary labels are selected as the most likely errors for annotators to fix. During the re-annotation process we keep the old labels hidden to prevent that information from 1106 biasing annotators’ decisions. Similarly, we keep the probability scores hidden while annotating. This process is done with multiple iterations of training, sampling, and re-annotating. We maintain the re-annotated instances in Sr to avoid annotating the same instance multiple times. After each round of annotation, we compare the old labels to the new labels to measure the degree of impact this process is having on the dataset. We stop re-annotating on the Ith round after we decide that the reward for an additional round of annotation is too low to justify. 
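As a concrete reference for Algorithm 1, the following Python sketch implements the selection loop under stated assumptions: train_clf stands for any classifier factory exposing a predict_proba method (its probability columns are assumed ordered as [-1, +1]), and reannotate stands in for the expert annotators. The function and variable names are ours, not the authors'.

import numpy as np

def active_label_correction(X, y, train_clf, reannotate, m=300, iters=9, k=10):
    # Algorithm 1: iteratively re-annotate the m most suspicious labels per round.
    y = np.array(y)
    seen = set()                                   # S_r: instances already re-annotated
    for _ in range(iters):
        folds = np.array_split(np.random.permutation(len(y)), k)
        conf = np.zeros(len(y))                    # confidence of the conflicting class
        for f in range(k):
            test = folds[f]
            train = np.concatenate([folds[j] for j in range(k) if j != f])
            model = train_clf(X[train], y[train])
            proba = model.predict_proba(X[test])   # assumed column order: [-1, +1]
            conf[test] = np.where(y[test] == 1, proba[:, 0], proba[:, 1])
        ranked = [i for i in np.argsort(-conf) if i not in seen][:m]
        y[ranked] = reannotate(ranked)             # experts supply the corrected labels
        seen.update(ranked)
    return y

The loop mirrors the paper's design choices: cross-validation keeps predictions out-of-sample, and instances whose highest-confidence predicted class conflicts with their current label are queued for re-annotation first.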
4 Feature Weighting Methods Building the classifier C that allows the most likely mislabeled instances to be selected and annotated is the essence of the active learning approach. There are two main goals of developing this classifier: (1) accurately predicting the labels of data points and ranking them based on prediction confidence, so that the most likely errors can be effectively identified; (2) requiring less time on training, so that the saved time can be spent on correcting more labeling errors. Thus we aim to build a classifier that is both accurate and time efficient. Labeling noise affects the classification accuracy. One possible reason is that some effective features that should be given high weights are inhibited in the training phase due to the labeling errors. For example, emoticon “:D” is a good indicator for emotion happy, however, if by mistake many instances containing this emoticon are not correctly labeled as happy, this class-specific feature would be underestimated during training. Following this idea, we develop computationally cheap feature weighting techniques to counteract such effect by boosting the weight of discriminative features, so that they would not be subdued and the instances with such features would have higher chance to be correctly classified. Specifically, we propose a non-linear distribution spreading algorithm for feature weighting. This algorithm first utilizes Delta IDF to weigh the features, and then non-linearly spreads out the distribution of features’ Delta IDF scores to exaggerate the weight of discriminative features. We first introduce Delta-IDF technique, and then describe our algorithm of distribution spreading. Since we focus on n-gram features, we use the words feature and term interchangeably in this paper. 4.1 Delta IDF Weighting Scheme Different from the commonly used TF (term frequency) or TF.IDF (term frequency.inverse document frequency) weighting schemes, Delta IDF treats the positive and negative training instances as two separate corpora, and weighs the terms by how biased they are to one corpus. The more biased a term is to one class, the higher (absolute value of) weight it will get. Delta IDF boosts the importance of terms that tend to be class-specific in the dataset, since they are usually effective features in distinguishing one class from another. Each training instance (e.g., a document) is represented as a feature vector: xi = (w1,i, ..., w|V |,i), where each dimension in the vector corresponds to a n-gram term in vocabulary V = {t1, ..., t|V |}, |V | is the number of unique terms, and wj,i(1 ≤j ≤|V |) is the weight of term tj in instance xi. Delta IDF (Martineau and Finin, 2009) assigns score ∆idfj to term tj in V as: ∆idfj = log (N + 1)(Pj + 1) (Nj + 1)(P + 1) (1) where P (or N) is the number of positively (or negatively) labeled training instances, Pj (or Nj) is the number of positively (or negatively) labeled training instances with term tj. Simple add-one smoothing is used to smooth low frequency terms and prevent dividing by zero when a term appears in only one corpus. We calculate the Delta IDF score of every term in V , and get the Delta IDF weight vector ∆= (∆idf1, ..., ∆idf|V |) for all terms. When the dataset is imblanced, to avoid building a biased model, we down sample the majority class before calculating the Delta IDF score and then use the a bias balancing procedure to balance the Delta IDF weight vector. 
This procedure first divides the Delta IDF weight vector into two vectors, one of which contains all the features with positive scores, and the other of which contains all the features with negative scores. It then applies L2 normalization to each of the two vectors, and adds them together to create the final vector. For each instance, we can calculate the TF.Delta-IDF score as its weight:

w_{j,i} = tf_{j,i} × ∆idf_j,    (2)

where tf_{j,i} is the number of times term t_j occurs in document x_i, and ∆idf_j is the Delta IDF score of t_j.
4.2 A Non-linear Distribution Spreading Algorithm
The Delta IDF technique boosts the weight of features with strong discriminative power. The model's ability to discriminate at the feature level can be further enhanced by leveraging the distribution of feature weights across multiple classes, e.g., multiple emotion categories funny, happy, sad, exciting, boring, etc. The distinction of multiple classes can be used to further force feature bias scores apart to improve the identification of class-specific features in the presence of labeling errors.
Let L be a set of target classes, and |L| be the number of classes in L. For each class l ∈ L, we create a binary labeled dataset ˆD_l. Let V_l be the vocabulary of dataset ˆD_l, V be the vocabulary of all datasets, and |V| be the number of unique terms in V. Using Formula (1) and dataset ˆD_l, we get the Delta IDF weight vector for each class l: ∆_l = (∆idf_1^l, ..., ∆idf_{|V|}^l). Note that ∆idf_j^l = 0 for any term t_j ∈ V − V_l. For a class u, we calculate the spreading score spread_j^u of each feature t_j ∈ V using a non-linear distribution spreading formula as follows (where s is the configurable spread parameter):

spread_j^u = ∆idf_j^u × ( Σ_{l ∈ L−u} |∆idf_j^u − ∆idf_j^l|^s ) / (|L| − 1).    (3)

For any term t_j ∈ V, we can get its Delta IDF score on a class l. The distribution of Delta IDF scores of t_j on all classes in L is represented as δ_j = {∆idf_j^1, ..., ∆idf_j^{|L|}}. The mechanism of Formula (3) is to non-linearly spread out the distribution, so that the importance of class-specific features can be further boosted to counteract the effect of noisy labels. Specifically, according to Formula (3), a high (absolute value of) spread score indicates that the Delta IDF score of that term on that class is high and deviates greatly from the scores on other classes. In other words, our algorithm assigns a high spread score (absolute value) to a term on a class for which the term has strong discriminative power and is very specific to that class compared with other classes. When the dataset is imbalanced, we apply the same bias balancing procedure as described in Section 4.1 to the spreading model.
While these feature weighting models can be used to score and rank instances for data cleaning, better classification and regression models can be built by using the feature weights generated by these models as a pre-weight on the data points for other machine learning algorithms.
5 Experiments
We conduct experiments on a Twitter dataset that contains tweets about TV shows and movies. The goal is to extract consumers' emotional reactions to multimedia content, which has broad commercial applications including targeted advertising, intelligent search, and recommendation. To create the dataset, we collected 2 billion unique tweets using Twitter API queries for a list of known TV shows and movies on IMDB. Spam tweets were filtered out using a set of heuristics and manually crafted rules.
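Before turning to the data, the weighting schemes of Section 4 can be summarized in code. This is an illustrative Python sketch of Formulas (1) and (3) with our own function names; it omits the down-sampling and bias-balancing steps and assumes documents are given as token lists.

import math
from collections import Counter

def delta_idf(docs, labels):
    # Formula (1): delta_idf_j = log[(N+1)(P_j+1) / ((N_j+1)(P+1))], with add-one smoothing
    P = sum(1 for l in labels if l == 1)
    N = len(labels) - P
    Pj, Nj = Counter(), Counter()
    for doc, l in zip(docs, labels):
        for t in set(doc):
            (Pj if l == 1 else Nj)[t] += 1
    vocab = set(Pj) | set(Nj)
    return {t: math.log((N + 1) * (Pj[t] + 1) / ((Nj[t] + 1) * (P + 1))) for t in vocab}

def spread(delta_by_class, s=2):
    # Formula (3): spread_j^u = delta_idf_j^u * (sum over l != u of |delta^u - delta^l|^s) / (|L| - 1)
    classes = list(delta_by_class)
    out = {}
    for u in classes:
        others = [c for c in classes if c != u]
        out[u] = {t: d * sum(abs(d - delta_by_class[l].get(t, 0.0)) ** s
                             for l in others) / len(others)
                  for t, d in delta_by_class[u].items()}
    return out

A document's score under either scheme is then simply the dot product of its term-frequency vector with the resulting weight vector, as in Formula (2).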
From the set of 2 billion tweets we randomly selected a small subset of 100K tweets about the 60 most highly mentioned TV shows and movies in the dataset. Tweets were randomly sampled for each show using the round robin algorithm. Duplicates were not allowed. This samples an equal number of tweets for each show. We then sent these tweets to Amazon Mechanical Turk for annotation. We defined our own set of emotions to annotate. The widely accepted emotion taxonomies, including Ekman's Basic Emotions (Ekman, 1999), Russell's Circumplex model (Russell and Barrett, 1999), and Plutchik's emotion wheel (Plutchik, 2001), did not fit well for TV shows and movies. For example, the emotion expressed by laughter is a very important emotion for TV shows and movies, but this emotion is not covered by the taxonomies listed above. After browsing through the raw dataset, reviewing the literature on emotion analysis, and considering the TV and movie problem domain, we decided to focus on eight emotions: funny, happy, sad, exciting, boring, angry, fear, and heartwarming.
Emotion annotation is a non-trivial task that is typically time-consuming, expensive and error-prone. This task is difficult because: (1) There are multiple emotions to annotate. In this work, we annotate eight different emotions. (2) Emotion expressions could be subtle and ambiguous and thus are easy to miss when labeling quickly. (3) The dataset is very imbalanced, which increases the problem of confirmation bias. As minority classes, emotional tweets can be easily missed because the last X tweets are all not emotional, and the annotators do not expect the next one to be either. Due to these reasons, there is a lack of sufficient and high quality labeled data for emotion research. Some researchers have studied harnessing Twitter hashtags to automatically create an emotion annotated dataset (Wang et al., 2012).

          Funny    Happy    Sad      Exciting  Boring   Angry    Fear     Heartwarming
# Pos.    1,324    405      618      313       209      92       164      24
# Neg.    88,782   95,639   84,212   79,902    82,443   57,326   46,746   15,857
# Total   90,106   96,044   84,830   80,215    82,652   57,418   46,910   15,881

Table 1: Amazon Mechanical Turk annotation label counts.

          Funny    Happy    Sad      Exciting  Boring   Angry    Fear     Heartwarming
# Pos.    1,781    4,847    788      1,613     216      763      285      326
# Neg.    88,277   91,075   84,031   78,573    82,416   56,584   46,622   15,542
# Total1  90,058   95,922   84,819   80,186    82,632   57,347   46,907   15,868

Table 2: Ground truth annotation label counts for each emotion.2

In order to evaluate our approach in real world scenarios, instead of creating a high quality annotated dataset and then introducing artificial noise, we followed the common practice of crowdsourcing, and collected emotion annotations through Amazon Mechanical Turk (AMT). This AMT annotated dataset was used as the low quality dataset ˆD in our evaluation. After that, the same dataset was annotated independently by a group of expert annotators to create the ground truth. We evaluate the proposed approach on two factors, the effectiveness of the models for emotion classification, and the improvement of annotation quality provided by the active learning procedure. We first describe the AMT annotation and ground truth annotation, and then discuss the baselines and experimental results.
Amazon Mechanical Turk Annotation: we posted the set of 100K tweets to the workers on AMT for emotion annotation. We defined a set of annotation guidelines, which specified rules and examples to help annotators determine when to tag a tweet with an emotion.
We applied substantial quality control to our AMT workers to improve the initial quality of annotation following the common practice of crowdsourcing. Each tweet was annotated by at least two workers. We used a series of tests to identify bad workers. These tests include (1) identifying workers with poor pairwise agreement, (2) identifying workers with poor performance on English language annotation, (3) identifying workers that were annotating at unrealistic speeds, (4) identifying workers with near random annotation distributions, and (5) identifying workers that annotate each tweet for a given TV show the same (or nearly the same) way. We manually inspected any worker with low performance on any of these tests before we made a final decision about using any of their annotations. For further quality control, we also gathered additional annotations from additional workers for tweets where only one out of two workers identified an emotion. After these quality control steps we defined minimum emotion annotation thresholds to determine and assign preliminary emotion labels to tweets. Note that some tweets were discarded as mixed examples for each emotion based upon thresholds for how many times they were tagged, and this resulted in a different number of tweets in each emotion dataset. See Table 1 for the statistics of the annotations collected from AMT.
Ground Truth Annotation: After we obtained the annotated dataset from AMT, we posted the same dataset (without the labels) to a group of expert annotators. The experts followed the same annotation guidelines, and each tweet was labeled by at least two experts. When there was a disagreement between two experts, they discussed to reach an agreement or gathered an additional opinion from another expert to decide the label of a tweet. We used this annotated dataset as ground truth. See Table 2 for the statistics of the ground truth annotations. Compared with the ground truth, many emotion-bearing tweets were missed by the AMT annotators, despite the quality control we applied. It demonstrates the challenge of annotation by crowdsourcing. The imbalanced class distribution aggravates the confirmation bias – the minority class examples are especially easy to miss when labeling quickly due to their rare presence in the dataset.
1The total number of tweets is lower than the AMT dataset because the experts removed some off-topic tweets.
2Expert annotators had a Kappa agreement score of 0.639 before meeting to resolve their differences.

Figure 1: Performance comparison of mislabeled instance selection methods; panel (a) plots macro-averaged MAP and panel (b) macro-averaged F1 score against the number of instances re-annotated, for Spread, SVM-TF, and SVM-Delta-IDF. Classifiers become more accurate as more instances are re-annotated. Spread achieves comparable performance with SVMs in terms of both MAP and F1 Score.

Evaluation Metric: We evaluated the results with both Mean Average Precision (MAP) and F1 Score. Average Precision (AP) is the average of the algorithm's precision at every position in the confidence ranked list of results where a true emotional document has been identified. Thus, AP places extra emphasis on getting the front of the list correct.
MAP is the mean of the average precision scores for each ranked list. This is highly desirable for many practical application such as intelligent search, recommendation, and target advertising where users almost never see results that are not at the top of the list. F1 is a widely-used measure of classification accuracy. Methods: We evaluated the overall performance relative to the common SVM bag of words approach that can be ubiquitously found in text mining literature. We implemented the following four classification methods: • Delta-IDF: Takes the dot product of the Delta IDF weight vector (Formula 1) with the document’s term frequency vector. • Spread: Takes the dot product of the distribution spread weight vector (Formula 3) with the document’s term frequency vector. For all the experiments, we used spread parameter s = 2. • SVM-TF: Uses a bag of words SVM with term frequency weights. • SVM-Delta-IDF: Uses a bag of words SVM classification with TF.Delta-IDF weights (Formula 2) in the feature vectors before training or testing an SVM. We employed each method to build the active learner C described in Algorithm 1. We used standard bag of unigram and bigram words representation and topic-based fold cross validation. Since in real world applications people are primarily concerned with how well the algorithm will work for new TV shows or movies that may not be included in the training data, we defined a test fold for each TV show or movie in our labeled data set. Each test fold corresponded to a training fold containing all the labeled data from all the other TV shows and movies. We call it topic-based fold cross validation. We built the SVM classifiers using LIBLINEAR (Fan et al., 2008) and applied its L2-regularized support vector regression model. Based on the dot product or SVM regression scores, we ranked the tweets by how strongly they express the emotion. We selected the top m tweets with the highest dot product or regression scores but conflicting preliminary AMT labels as the suspected mislabeled instances for re-annotation, just as described in Algorithm 1. For the experimental purpose, the re-annotation was done by assigning the ground truth labels to the selected instances. Since the dataset is highly imbalanced, we applied the under-sampling strategy when training the classifiers. Figure 1 compares the performance of different approaches in each iteration after a certain number of potentially mislabeled instances are re1110 annotated. The X axis shows the total number of data points that have been examined for each emotion so far till the current iteration (i.e., 300, 900, 1800, 3000, 4500, 6900, 10500, 16500, and 26100). We reported both the macro-averaged MAP (Figure 1a) and the macro-averaged F1 Score (Figure 1b) on eight emotions as the overall performance of three competitive methods – Spread, SVM-Delta-IDF and SVM-TF. We have also conducted experiments using Delta-IDF, but its performance is low and not comparable with the other three methods. Generally, Figure 1 shows consistent performance gains as more labels are corrected during active learning. In comparison, SVM-Delta-IDF significantly outperforms SVM-TF with respect to both MAP and F1 Score. SVM-TF achieves higher MAP and F1 Score than Spread at the first few iterations, but then it is beat by Spread after 16,500 tweets had been selected and re-annotated till the eighth iteration. 
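For reference, the MAP values plotted in Figure 1a follow the standard average-precision computation; a minimal sketch, not tied to the authors' evaluation code, is given below, assuming each ranked list is reduced to binary relevance flags sorted by descending classifier confidence.

def average_precision(ranked_relevance):
    # ranked_relevance: 1/0 flags for documents sorted by descending confidence
    hits, total = 0, 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            total += hits / rank          # precision at each relevant position
    return total / hits if hits else 0.0

def mean_average_precision(ranked_lists):
    return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)

# example: AP of [1, 0, 1, 0, 0] = (1/1 + 2/3) / 2, roughly 0.83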
Overall, at the end of the active learning process, Spread outperforms SVMTF by 3.03% the MAP score (and by 4.29% the F1 score), and SVM-Delta-IDF outperforms SVMTF by 8.59% the MAP score (and by 5.26% the F1 score). Spread achieves a F1 Score of 58.84%, which is quite competitive compared to 59.82% achieved by SVM-Delta-IDF, though SVM-DeltaIDF outperforms Spread with respect to MAP. Spread and Delta-IDF are superior with respect to the time efficiency. Figure 2 shows the average training time of the four methods on eight emotions. The time spent training SVM-TF classifiers is twice that of SVM-Delta-IDF classifiers, 12 times that of Spread classifiers, and 31 times that of Delta-IDF classifiers. In our experiments, on average, it took 258.8 seconds to train a SVMTF classifier for one emotion. In comparison, the average training time of a Spread classifier was only 21.4 seconds, and it required almost no parameter tuning. In total, our method Spread saved up to (258.8 −21.4) ∗9 ∗8 = 17092.8 seconds (4.75 hours) over nine iterations of active learning for all the eight emotions. This is enough time to re-annotate thousands of data points. The other important quantity to measure is annotation quality. One measure of improvement for annotation quality is the number of mislabeled instances that can be fixed after a certain number of active learning iterations. Better methods can fix more labels with fewer iterations. 0 100 200 Delta−IDF Spread SVM−Delta−IDF SVM−TF Method Average Traming Time (s) Figure 2: Average training time on eight emotions. Spread requires only one-twelfth of the time spent to training an SVMTF classifier. Note that the time spent tuning the SVM’s parameters has not been included, but is considerable. Compared with such computationally expensive methods, Spread is more appropriate for use with active learning. G G GG G G G G G G 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% 0 4500 9000 13500 18000 22500 27000 Number of Instances Re−annotated Percentage of Fixed Labels Method G Spread SVM−TF SVM−Delta−IDF Delta−IDF Random Figure 3: Accumulated average percentage of fixed labels on eight emotions. Spreading the feature weights reduces the number of data points that must be examined in order to correct the mislabeled instances. SVMs require slightly fewer points but take far longer to build. Besides the four methods, we also implemented a random baseline (Random) which randomly selected the specified number of instances for reannotation in each round. We compared the improved dataset with the final ground truth at the end of each round to monitor the progress. Figure 3 reports the accumulated average percentage of corrected labels on all emotions in each iteration of the active learning process. According to the figure, SVM-Delta-IDF and SVM-TF are the most advantageous methods, followed by Spread and Delta-IDF. After the last iteration, SVM-Delta-IDF, SVM-TF, Spread and Delta-IDF has fixed 85.23%, 85.85%, 81.05% and 58.66% of the labels, respectively, all of which significantly outperform the Random baseline (29.74%). 1111 6 Conclusion In this paper, we explored an active learning approach to improve data annotation quality for classification tasks. Instead of training the active learner using computationally expensive techniques (e.g., SVM-TF), we used a novel non-linear distribution spreading algorithm. 
This algorithm first weighs the features using the Delta-IDF technique, and then non-linearly spreads out the distribution of the feature scores to enhance the model’s ability to discriminate at the feature level. The evaluation shows that our algorithm has the following advantages: (1) It intelligently ordered the data points for annotators to annotate the most likely errors first. The accuracy was at least comparable with computationally expensive baselines (e.g. SVM-TF). (2) The algorithm trained and ran much faster than SVM-TF, allowing annotators to finish more annotations than competitors. (3) The annotation process improved the dataset quality by positively impacting the accuracy of classifiers that were built upon it. References Carla E Brodley and Mark A Friedl. 1999. Identifying mislabeled training data. Journal of Artificial Intelligence Research, 11:131–167. Chris Callison-Burch. 2009. Fast, cheap, and creative: evaluating translation quality using amazon’s mechanical turk. In Proceedings of EMNLP, pages 286–295. ACL. Paul Ekman. 1999. Basic emotions. Handbook of cognition and emotion, 4:5–60. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874. Donghai Guan, Weiwei Yuan, Young-Koo Lee, and Sungyoung Lee. 2011. Identifying mislabeled training data with the aid of unlabeled data. Applied Intelligence, 35(3):345–358. Wenxin Jiang. 2001. Some theoretical aspects of boosting in the presence of noisy data. In Proceedings of ICML. Citeseer. Adam Tauman Kalaia and Rocco A Servediob. 2005. Boosting in the presence of noise. Journal of Computer and System Sciences, 71:266–290. Amitava Karmaker and Stephen Kwek. 2006. A boosting approach to remove class label noise. International Journal of Hybrid Intelligent Systems, 3(3):169–177. Aniket Kittur, Ed H Chi, and Bongwon Suh. 2008. Crowdsourcing user studies with mechanical turk. In Proceedings of CHI, pages 453–456. ACM. Srivatsan Laxman, Sushil Mittal, and Ramarathnam Venkatesan. 2013. Error correction in learning using svms. arXiv preprint arXiv:1301.2012. Justin Martineau and Tim Finin. 2009. Delta tfidf: An improved feature space for sentiment analysis. In Proceedings of ICWSM. John Mingers. 1989. An empirical comparison of pruning methods for decision tree induction. Machine learning, 4(2):227–243. Fabrice Muhlenbach, St´ephane Lallich, and Djamel A Zighed. 2004. Identifying and handling mislabelled instances. Journal of Intelligent Information Systems, 22(1):89–109. Robert Plutchik. 2001. The nature of emotions. American Scientist, 89(4):344–350. Umaa Rebbapragada, Carla E Brodley, Damien SullaMenashe, and Mark A Friedl. 2012. Active label correction. In Proceedings of ICDM, pages 1080– 1085. IEEE. James A Russell and Lisa Feldman Barrett. 1999. Core affect, prototypical emotional episodes, and other things called emotion: dissecting the elephant. Journal of personality and social psychology, 76(5):805. Burr Settles. 2010. Active learning literature survey. Technical Report 1648, University of Wisconsin, Madison. Victor S Sheng, Foster Provost, and Panagiotis G Ipeirotis. 2008. Get another label? improving data quality and data mining using multiple, noisy labelers. In Proceedings of KDD, pages 614–622. ACM. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y Ng. 2008. Cheap and fast—but is it good?: evaluating non-expert annotations for natural language tasks. 
In Proceedings of EMNLP, pages 254–263. Qi Su, Dmitry Pavlov, Jyh-Herng Chow, and Wendell C Baker. 2007. Internet-scale collection of humanreviewed data. In Proceedings of WWW, pages 231– 240. ACM. P Vannoorenberghe and T Denoeux. 2002. Handling uncertain labels in multiclass problems using belief decision trees. In Proceedings of IPMU, volume 3, pages 1919–1926. Sofie Verbaeten and Anneleen Van Assche. 2003. Ensemble methods for noise elimination in classification problems. In Multiple classifier systems, pages 317–325. Springer. Wenbo Wang, Lu Chen, Krishnaprasad Thirunarayan, and Amit P Sheth. 2012. Harnessing twitter” big data” for automatic emotion identification. In Proceedings of SocialCom, pages 587–592. IEEE. Xinchuan Zeng and Tony R Martinez. 2001. An algorithm for correcting mislabeled data. Intelligent data analysis, 5(6):491–502. Xingquan Zhu, Xindong Wu, and Qijun Chen. 2003. Eliminating class noise in large datasets. In Proceedings of ICML, pages 920–927. 1112
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1113–1122, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Political Ideology Detection Using Recursive Neural Networks Mohit Iyyer1, Peter Enns2, Jordan Boyd-Graber3,4, Philip Resnik2,4 1Computer Science, 2Linguistics, 3iSchool, and 4UMIACS University of Maryland {miyyer,peter,jbg}@umiacs.umd.edu, [email protected] Abstract An individual’s words often reveal their political ideology. Existing automated techniques to identify ideology from text focus on bags of words or wordlists, ignoring syntax. Taking inspiration from recent work in sentiment analysis that successfully models the compositional aspect of language, we apply a recursive neural network (RNN) framework to the task of identifying the political position evinced by a sentence. To show the importance of modeling subsentential elements, we crowdsource political annotations at a phrase and sentence level. Our model outperforms existing models on our newly annotated dataset and an existing dataset. 1 Introduction Many of the issues discussed by politicians and the media are so nuanced that even word choice entails choosing an ideological position. For example, what liberals call the “estate tax” conservatives call the “death tax”; there are no ideologically neutral alternatives (Lakoff, 2002). While objectivity remains an important principle of journalistic professionalism, scholars and watchdog groups claim that the media are biased (Groseclose and Milyo, 2005; Gentzkow and Shapiro, 2010; Niven, 2003), backing up their assertions by publishing examples of obviously biased articles on their websites. Whether or not it reflects an underlying lack of objectivity, quantitative changes in the popular framing of an issue over time—favoring one ideologically-based position over another—can have a substantial effect on the evolution of policy (Dardis et al., 2008). Manually identifying ideological bias in political text, especially in the age of big data, is an impractical and expensive process. Moreover, bias They dubbed it the death tax “ ” and created a big lie about its adverse effects on small businesses Figure 1: An example of compositionality in ideological bias detection (red →conservative, blue → liberal, gray →neutral) in which modifier phrases and punctuation cause polarity switches at higher levels of the parse tree. may be localized to a small portion of a document, undetectable by coarse-grained methods. In this paper, we examine the problem of detecting ideological bias on the sentence level. We say a sentence contains ideological bias if its author’s political position (here liberal or conservative, in the sense of U.S. politics) is evident from the text. Ideological bias is difficult to detect, even for humans—the task relies not only on political knowledge but also on the annotator’s ability to pick up on subtle elements of language use. For example, the sentence in Figure 1 includes phrases typically associated with conservatives, such as “small businesses” and “death tax”. When we take more of the structure into account, however, we find that scare quotes and a negative propositional attitude (a lie about X) yield an evident liberal bias. Existing approaches toward bias detection have not gone far beyond “bag of words” classifiers, thus ignoring richer linguistic context of this kind and often operating at the level of whole documents. 
In contrast, recent work in sentiment analysis has used deep learning to discover compositional effects (Socher et al., 2011b; Socher et al., 2013b). Building from those insights, we introduce a recursive neural network (RNN) to detect ideological bias on the sentence level. This model requires 1113 wb = change wa = climate wd = so-called pc = climate change pe = so-called climate change xd= xc= xe= xa= xb= WL WR WR WL Figure 2: An example RNN for the phrase “socalled climate change”. Two d-dimensional word vectors (here, d = 6) are composed to generate a phrase vector of the same dimensionality, which can then be recursively used to generate vectors at higher-level nodes. richer data than currently available, so we develop a new political ideology dataset annotated at the phrase level. With this new dataset we show that RNNs not only label sentences well but also improve further when given additional phrase-level annotations. RNNs are quantitatively more effective than existing methods that use syntactic and semantic features separately, and we also illustrate how our model correctly identifies ideological bias in complex syntactic constructions. 2 Recursive Neural Networks Recursive neural networks (RNNs) are machine learning models that capture syntactic and semantic composition. They have achieved state-of-the-art performance on a variety of sentence-level NLP tasks, including sentiment analysis, paraphrase detection, and parsing (Socher et al., 2011a; Hermann and Blunsom, 2013). RNN models represent a shift from previous research on ideological bias detection in that they do not rely on hand-made lexicons, dictionaries, or rule sets. In this section, we describe a supervised RNN model for bias detection and highlight differences from previous work in training procedure and initialization. 2.1 Model Description By taking into account the hierarchical nature of language, RNNs can model semantic composition, which is the principle that a phrase’s meaning is a combination of the meaning of the words within that phrase and the syntax that combines those words. While semantic composition does not apply universally (e.g., sarcasm and idioms), most language follows this principle. Since most ideological bias becomes identifiable only at higher levels of sentence trees (as verified by our annotation, Figure 4), models relying primarily on wordlevel distributional statistics are not desirable for our problem. The basic idea behind the standard RNN model is that each word w in a sentence is associated with a vector representation xw ∈Rd. Based on a parse tree, these words form phrases p (Figure 2). Each of these phrases also has an associated vector xp ∈Rd of the same dimension as the word vectors. These phrase vectors should represent the meaning of the phrases composed of individual words. As phrases themselves merge into complete sentences, the underlying vector representation is trained to retain the sentence’s whole meaning. The challenge is to describe how vectors combine to form complete representations. If two words wa and wb merge to form phrase p, we posit that the phrase-level vector is xp = f(WL · xa + WR · xb + b1), (1) where WL and WR are d × d left and right composition matrices shared across all nodes in the tree, b1 is a bias term, and f is a nonlinear activation function such as tanh. The word-level vectors xa and xb come from a d × V dimensional word embedding matrix We, where V is the size of the vocabulary. 
We are interested in learning representations that can distinguish political polarities given labeled data. If an element of this vector space, x_d, represents a sentence with liberal bias, its vector should be distinct from the vector x_r of a conservative-leaning sentence. Supervised RNNs achieve this distinction by applying a regression that takes the node's vector x_p as input and produces a prediction ŷ_p. This is a softmax layer

ŷ_d = softmax(W_cat · x_p + b_2),    (2)

where the softmax function is

softmax(q) = exp(q) / Σ_{j=1}^{k} exp(q_j),    (3)

and W_cat is a k × d matrix for a dataset with k-dimensional labels.
We want the predictions of the softmax layer to match our annotated data; the discrepancy between categorical predictions and annotations is measured through the cross-entropy loss. We optimize the model parameters to minimize the cross-entropy loss over all sentences in the corpus. The cross-entropy loss of a single sentence is the sum over the true labels y_i in the sentence,

ℓ(ŷ_s) = Σ_{p=1}^{k} y_p · log(ŷ_p).    (4)

This induces a supervised objective function over all sentences: a regularized sum over all node losses normalized by the number of nodes N in the training set,

C = (1/N) Σ_{i}^{N} ℓ(pred_i) + (λ/2) ∥θ∥².    (5)

We use L-BFGS with parameter averaging (Hashimoto et al., 2013) to optimize the model parameters θ = (W_L, W_R, W_cat, W_e, b_1, b_2). The gradient of the objective, shown in Eq. (6), is computed using backpropagation through structure (Goller and Kuchler, 1996),

∂C/∂θ = (1/N) Σ_{i}^{N} ∂ℓ(ŷ_i)/∂θ + λθ.    (6)

2.2 Initialization
When initializing our model, we have two choices: we can initialize all of our parameters randomly or provide the model some prior knowledge. As we see in Section 4, these choices have a significant effect on final performance.
Random The most straightforward choice is to initialize the word embedding matrix W_e and composition matrices W_L and W_R randomly such that without any training, representations for words and phrases are arbitrarily projected into the vector space.
word2vec The other alternative is to initialize the word embedding matrix W_e with values that reflect the meanings of the associated word types. This improves the performance of RNN models over random initializations (Collobert and Weston, 2008; Socher et al., 2011a). We initialize our model with 300-dimensional word2vec toolkit vectors generated by a continuous skip-gram model trained on around 100 billion words from the Google News corpus (Mikolov et al., 2013). The word2vec embeddings have linear relationships (e.g., the closest vectors to the average of "green" and "energy" include phrases such as "renewable energy", "eco-friendly", and "efficient lightbulbs"). To preserve these relationships as phrases are formed in our sentences, we initialize our left and right composition matrices such that parent vector p is computed by taking the average of children a and b (W_L = W_R = 0.5 · I_{d×d}). This initialization of the composition matrices has previously been effective for parsing (Socher et al., 2013a).
3 Datasets
We performed initial experiments on a dataset of Congressional debates that has annotations on the author level for partisanship, not ideology. While the two terms are highly correlated (e.g., a member of the Republican party likely agrees with conservative stances on most issues), they are not identical. For example, a moderate Republican might agree with the liberal position on increased gun control but take conservative positions on other issues.
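Before moving on, the supervised layer of Equations (2)-(5) can be sketched as follows. This is an illustrative NumPy version with our own variable names, not the authors' released code; it omits backpropagation through structure and parse-tree handling, and it uses the conventional negated cross-entropy when aggregating the node losses.

import numpy as np

def node_prediction(x_p, W_cat, b2):
    # Equations (2)-(3): y_hat = softmax(W_cat · x_p + b_2)
    q = W_cat @ x_p + b2
    e = np.exp(q - q.max())                # numerically stabilized softmax
    return e / e.sum()

def objective(preds, golds, theta_norm_sq, n_nodes, lam=1e-4):
    # Equations (4)-(5): cross-entropy between predictions and gold labels,
    # averaged over nodes, plus L2 regularization on the parameters
    ce = sum(-np.sum(y * np.log(y_hat)) for y_hat, y in zip(preds, golds))
    return ce / n_nodes + 0.5 * lam * theta_norm_sq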
To avoid conflating partisanship and ideology we create a new dataset annotated for ideological bias on the sentence and phrase level. In this section we describe our initial dataset (Convote) and explain the procedure we followed for creating our new dataset (IBC).1 3.1 Convote The Convote dataset (Thomas et al., 2006) consists of US Congressional floor debate transcripts from 2005 in which all speakers have been labeled with their political party (Democrat, Republican, or independent). We propagate party labels down from the speaker to all of their individual sentences and map from party label to ideology label (Democrat →liberal, Republican →conservative). This is an expedient choice; in future work we plan to make use of work in political science characterizing candidates’ ideological positions empirically based on their behavior (Carroll et al., 2009). While the Convote dataset has seen widespread use for document-level political classification, we are unaware of similar efforts at the sentence level. 3.1.1 Biased Sentence Selection The strong correlation between US political parties and political ideologies (Democrats with liberal, Republicans with conservative) lends confidence that this dataset contains a rich mix of ideological 1Available at http://cs.umd.edu/˜miyyer/ibc 1115 statements. However, the raw Convote dataset contains a low percentage of sentences with explicit ideological bias.2 We therefore use the features in Yano et al. (2010), which correlate with political bias, to select sentences to annotate that have a higher likelihood of containing bias. Their features come from the Linguistic Inquiry and Word Count lexicon (LIWC) (Pennebaker et al., 2001), as well as from lists of “sticky bigrams” (Brown et al., 1992) strongly associated with one party or another (e.g., “illegal aliens” implies conservative, “universal healthcare” implies liberal). We first extract the subset of sentences that contains any words in the LIWC categories of Negative Emotion, Positive Emotion, Causation, Anger, and Kill verbs.3 After computing a list of the top 100 sticky bigrams for each category, ranked by loglikelihood ratio, and selecting another subset from the original data that included only sentences containing at least one sticky bigram, we take the union of the two subsets. Finally, we balance the resulting dataset so that it contains an equal number of sentences from Democrats and Republicans, leaving us with a total of 7,816 sentences. 3.2 Ideological Books In addition to Convote, we use the Ideological Books Corpus (IBC) developed by Gross et al. (2013). This is a collection of books and magazine articles written between 2008 and 2012 by authors with well-known political leanings. Each document in the IBC has been manually labeled with coarse-grained ideologies (right, left, and center) as well as fine-grained ideologies (e.g., religious-right, libertarian-right) by political science experts. There are over a million sentences in the IBC, most of which have no noticeable political bias. Therefore we use the filtering procedure outlined in Section 3.1.1 to obtain a subset of 55,932 sentences. Compared to our final Convote dataset, an even larger percentage of the IBC sentences exhibit no noticeable political bias.4 Because our goal is to distinguish between liberal and conservative 2Many sentences in Convote are variations on “I think this is a good/bad bill”, and there is also substantial parliamentary boilerplate language. 3While Kill verbs are not a category in LIWC, Yano et al. 
(2010) adopted it from Greene and Resnik (2009) and showed it to be a useful predictor of political bias. It includes words such as “slaughter” and “starve”. 4This difference can be mainly attributed to a historical topics in the IBC (e.g., the Crusades, American Civil War). In Convote, every sentence is part of a debate about 2005 political policy. bias, instead of the more general task of classifying sentences as “neutral” or “biased”, we filter the dataset further using DUALIST (Settles, 2011), an active learning tool, to reduce the proportion of neutral sentences in our dataset. To train the DUALIST classifier, we manually assigned class labels of “neutral” or “biased” to 200 sentences, and selected typical partisan unigrams to represent the “biased” class. DUALIST labels 11,555 sentences as politically biased, 5,434 of which come from conservative authors and 6,121 of which come from liberal authors. 3.2.1 Annotating the IBC For purposes of annotation, we define the task of political ideology detection as identifying, if possible, the political position of a given sentence’s author, where position is either liberal or conservative.5 We used the Crowdflower crowdsourcing platform (crowdflower.com), which has previously been used for subsentential sentiment annotation (Sayeed et al., 2012), to obtain human annotations of the filtered IBC dataset for political bias on both the sentence and phrase level. While members of the Crowdflower workforce are certainly not experts in political science, our simple task and the ubiquity of political bias allows us to acquire useful annotations. Crowdflower Task First, we parse the filtered IBC sentences using the Stanford constituency parser (Socher et al., 2013a). Because of the expense of labeling every node in a sentence, we only label one path in each sentence. The process for selecting paths is as follows: first, if any paths contain one of the top-ten partisan unigrams,6 we select the longest such path; otherwise, we select the path with the most open class constituencies (NP, VP, ADJP). The root node of a sentence is always included in a path. Our task is shown in Figure 3. Open class constituencies are revealed to the worker incrementally, starting with the NP, VP, or ADJP furthest from the root and progressing up the tree. We choose this design to prevent workers from changing their lower-level phrase annotations after reading the full sentence. 5This is a simplification, as the ideological hierarchy in IBC makes clear. 6The words that the multinomial na¨ıve Bayes classifier in DUALIST marked as highest probability given a polarity: market, abortion, economy, rich, liberal, tea, economic, taxes, gun, abortion 1116 Filtering the Workforce To ensure our annotators have a basic understanding of US politics, we restrict workers to US IP addresses and require workers manually annotate one node from 60 different “gold ” paths annotated by the authors. We select these nodes such that the associated phrase is either obviously biased or obviously neutral. Workers must correctly annotate at least six of eight gold paths before they are granted access to the full task. In addition, workers must maintain 75% accuracy on gold paths that randomly appear alongside normal paths. Gold paths dramatically improve the quality of our workforce: 60% of contributors passed the initial quiz (the 40% that failed were barred from working on the task), while only 10% of workers who passed the quiz were kicked out for mislabeling subsequent gold paths. 
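For concreteness, the path-selection heuristic described above (a partisan unigram triggers the longest such path; otherwise the path with the most open-class constituents is chosen, with the root always included) can be sketched roughly as follows. The tree class, the path enumeration over root-to-leaf chains, and the abbreviated unigram list are our own simplifications, not the exact implementation.

```python
# Rough sketch of the annotation path-selection heuristic. Data structures are
# illustrative simplifications; the partisan unigram list is abbreviated.
from dataclasses import dataclass, field
from typing import List

PARTISAN_UNIGRAMS = {"market", "abortion", "economy", "rich", "liberal",
                     "tea", "economic", "taxes", "gun"}
OPEN_CLASS = {"NP", "VP", "ADJP"}

@dataclass
class Node:
    label: str                       # constituent label or POS tag
    word: str = ""                   # non-empty only for leaves
    children: List["Node"] = field(default_factory=list)

def leaf_paths(root):
    """All root-to-leaf paths, each as a list of nodes (root first)."""
    if not root.children:
        return [[root]]
    return [[root] + p for c in root.children for p in leaf_paths(c)]

def words_on_path(path):
    return {n.word.lower() for n in path if n.word}

def select_path(root):
    paths = leaf_paths(root)         # the root is always included
    partisan = [p for p in paths if words_on_path(p) & PARTISAN_UNIGRAMS]
    if partisan:                     # longest path containing a partisan unigram
        return max(partisan, key=len)
    # otherwise: path with the most open-class constituents (NP, VP, ADJP)
    return max(paths, key=lambda p: sum(n.label in OPEN_CLASS for n in p))

# Toy example: (S (NP (NN taxes)) (VP (VBP hurt) (NP (NNS families))))
tree = Node("S", children=[
    Node("NP", children=[Node("NN", word="taxes")]),
    Node("VP", children=[Node("VBP", word="hurt"),
                         Node("NP", children=[Node("NNS", word="families")])]),
])
print([n.label for n in select_path(tree)])   # path through "taxes"
```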
Annotation Results: Workers receive the following instructions:

Each task on this page contains a set of phrases from a single sentence. For each phrase, decide whether or not the author favors a political position to the left (Liberal) or right (Conservative) of center.
• If the phrase is indicative of a position to the left of center, please choose Liberal.
• If the phrase is indicative of a position to the right of center, please choose Conservative.
• If you feel like the phrase indicates some position to the left or right of the political center, but you're not sure which direction, please mark Not neutral, but I'm unsure of which direction.
• If the phrase is not indicative of a position to the left or right of center, please mark Neutral.

Figure 3: Example political ideology annotation task showing incremental reveal of progressively longer phrases.

We had workers annotate 7,000 randomly selected paths from the filtered IBC dataset, with half of the paths coming from conservative authors and the other half from liberal authors, as annotated by Gross et al. (2013). Three workers annotated each path in the dataset, and we paid $0.03 per sentence. Since identifying political bias is a relatively difficult and subjective task, we include in our final dataset all sentences where at least two workers agree on a label for the root node, except when that label is "Not neutral, but I'm unsure of which direction". We only keep phrase-level annotations where at least two workers agree on the label: 70.4% of all annotated nodes fit this definition of agreement. All unannotated nodes receive the label of their closest annotated ancestor. Since the root of each sentence is always annotated, this strategy ensures that every node in the tree has a label.

Our final balanced IBC dataset consists of 3,412 sentences (4,062 before balancing and removing neutral sentences) with a total of 13,640 annotated nodes. Of these sentences, 543 switch polarity (liberal → conservative or vice versa) on an annotated path. While we initially wanted to incorporate neutral labels into our model, we observed that lower-level phrases are almost always neutral while full sentences are much more likely to be biased (Figure 4). Due to this discrepancy, the objective function in Eq. (5) was minimized by making neutral predictions for almost every node in the dataset.

4 Experiments

In this section we describe our experimental framework. We discuss strong baselines that use lexical and syntactic information (including framing-specific features from previous work) as well as multiple RNN configurations. Each of these models has the same task: to predict sentence-level ideology labels for sentences in a test set. To account for label imbalance, we subsample the data so that there are an equal number of labels, and we report accuracy over this balanced dataset.

Figure 4: Proportion of liberal, conservative, and neutral annotations with respect to node depth (distance from root). As we get farther from the root of the tree, nodes are more likely to be neutral.

4.1 Baselines

• The RANDOM baseline chooses a label at random from {liberal, conservative}.
• LR1, our most basic logistic regression baseline, uses only bag-of-words (BoW) features.
• LR2 uses the same BoW features as LR1,
but additionally treats each phrase-level annotation as a separate training instance. (The Convote dataset was not annotated on the phrase level, so we only provide an LR2 result for the IBC dataset.)
• LR3 uses BoW features as well as syntactic pseudo-word features from Greene and Resnik (2009). These features, derived from dependency relations, specify properties of verbs (e.g., transitivity or nominalization). (We do not include phrase-level annotations in the LR3 feature set because the pseudo-word features can only be computed from full sentence parses.)
• LR-(W2V) is a logistic regression model trained on the average of the pretrained word embeddings for each sentence (Section 2.2).

The LR-(W2V) baseline allows us to compare against a strong lexical representation that encodes syntactic and semantic information without the RNN tree structure. LR1 and LR2 offer a comparison to simple bag-of-words models, while the LR3 baseline contrasts traditional syntactic features with those learned by RNN models.

4.2 RNN Models

For RNN models, we generate a feature vector for every node in the tree. Equation 1 allows us to percolate the representations to the root of the tree. We generate the final instance representation by concatenating the root vector and the average of all other vectors (Socher et al., 2011b). We train an L2-regularized logistic regression model over these concatenated vectors to obtain final accuracy numbers on the sentence level.

To analyze the effects of initialization and phrase-level annotations, we report results for three different RNN settings. All three models were implemented as described in Section 2, with the nonlinearity f set to the normalized tanh function,

f(v) = tanh(v) / ‖tanh(v)‖.    (7)

We perform 10-fold cross-validation on the training data to find the best RNN hyperparameters (the selected values are λ_We = 1e-6, λ_W = 1e-4, λ_Wcat = 1e-3, and β = 0.3). We report results for RNN models with the following configurations:
• RNN1 initializes all parameters randomly and uses only sentence-level labels for training.
• RNN1-(W2V) uses the word2vec initialization described in Section 2.2 but is also trained on only sentence-level labels.
• RNN2-(W2V) is initialized using word2vec embeddings and also includes annotated phrase labels in its training. For this model, we also introduce a hyperparameter β that weights the error at annotated nodes (1 − β) higher than the error at unannotated nodes (β); since we have more confidence in the annotated labels, we want them to contribute more towards the objective function.

For all RNN models, we set the word vector dimension d to 300 to facilitate direct comparison against the LR-(W2V) baseline; using smaller vector sizes (d ∈ {50, 100}, as in previous work) does not significantly change accuracy.

Model         Convote   IBC
RANDOM        50%       50%
LR1           64.7%     62.1%
LR2           –         61.9%
LR3           66.9%     62.6%
LR-(W2V)      66.6%     63.7%
RNN1          69.4%     66.2%
RNN1-(W2V)    70.2%     67.1%
RNN2-(W2V)    –         69.3%

Table 1: Sentence-level bias detection accuracy. The RNN framework, adding phrase-level data, and initializing with word2vec all improve performance over logistic regression baselines. The LR2 and RNN2-(W2V) models were not trained on Convote since it lacks phrase annotations.

5 Where Compositionality Helps Detect Ideological Bias

In this section we examine the RNN models to see why they improve over our baselines. We also give examples of sentences that are correctly classified by our best RNN model but incorrectly classified by all of the baselines. Finally, we investigate sentence constructions that our model cannot handle and offer possible explanations for these errors.
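To make the setup of Section 4.2 concrete before turning to the results, here is a minimal sketch of assembling the sentence-level instance vector (the root vector concatenated with the mean of the other node vectors) and fitting an L2-regularized logistic regression over it. The random node vectors below stand in for actual RNN outputs, and the use of scikit-learn is our choice, not necessarily the authors'.

```python
# Sketch of the Section 4.2 sentence representation and classifier.
# Node vectors are random stand-ins for RNN outputs; names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def normalized_tanh(v):
    """Eq. (7): tanh followed by L2 normalization."""
    t = np.tanh(v)
    return t / (np.linalg.norm(t) + 1e-12)

def sentence_features(node_vectors):
    """node_vectors[0] is assumed to be the root; the rest are all other tree nodes."""
    root = node_vectors[0]
    rest = np.mean(node_vectors[1:], axis=0)
    return np.concatenate([root, rest])          # 2d-dimensional instance vector

# Toy data: 40 "sentences", each with a handful of d-dimensional node vectors.
rng = np.random.default_rng(2)
d = 6
X = np.stack([sentence_features([normalized_tanh(rng.standard_normal(d))
                                 for _ in range(5)]) for _ in range(40)])
y = np.array([0, 1] * 20)                        # 0 = liberal, 1 = conservative (toy labels)

clf = LogisticRegression(C=1.0, max_iter=1000)   # C is the inverse L2 strength
clf.fit(X, y)
print(clf.score(X, y))
```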
Experimental Results: Table 1 shows the RNN models outperforming the bag-of-words baselines as well as the word2vec baseline on both datasets. The increased accuracy suggests that the trained RNNs are capable of detecting bias polarity switches at higher levels in parse trees. While phrase-level annotations do not improve baseline performance, the RNN model significantly benefits from these annotations because the phrases are themselves derived from nodes in the network structure. In particular, the phrase annotations allow our best model to detect bias accurately in complex sentences that the baseline models cannot handle. Initializing the RNN's word embedding matrix W_e with word2vec embeddings improves accuracy over random initialization by 1%. This is similar to improvements from pretrained vectors from neural language models (Socher et al., 2011b).

We obtain better results on Convote than on IBC with both bag-of-words and RNN models. This result was unexpected since the Convote labels are noisier than the annotated IBC labels; however, there are three possible explanations for the discrepancy. First, Convote has twice as many sentences as IBC, and the extra training data might help the model more than IBC's better-quality labels. Second, since the sentences in Convote were originally spoken, they are about half as long (21.3 words per sentence) as those in the IBC (42.2 words per sentence). Finally, some information is lost at every propagation step, so RNNs are able to model the shorter sentences in Convote more effectively than the longer IBC sentences.

Qualitative Analysis: As in previous work (Socher et al., 2011b), we visualize the learned vector space by listing the most probable n-grams for each political affiliation in Table 2. As expected, conservatives emphasize values such as freedom and religion while disparaging excess government spending and their liberal opposition. Meanwhile, liberals inveigh against the gap between the rich and the poor while expressing concern for minority groups and the working class.

Our best model is able to accurately model the compositional effects of bias in sentences with complex syntactic structures. The first three sentences in Figure 5 were correctly classified by our best model (RNN2-(W2V)) and incorrectly classified by all of the baselines. Figures 5A and C show traditional conservative phrases, "free market ideology" and "huge amounts of taxpayer money", that switch polarities higher up in the tree when combined with phrases such as "made worse by" and "saved by". Figure 5B shows an example of a bias polarity switch in the opposite direction: the sentence negatively portrays supporters of nationalized health care, which our model picks up on.

Our model often makes errors when polarity switches occur at nodes that are high up in the tree. In Figure 5D, "be used as an instrument to achieve charitable or social ends" reflects a liberal ideology, which the model predicts correctly. However, our model is unable to detect the polarity switch when this phrase is negated with "should not". Since many different issues are discussed in the IBC, it is likely that our dataset has too few examples of some of these issues for the model to adequately learn the appropriate ideological positions, and more training data would resolve many of these errors.
Figure 5: Predictions by RNN2-(W2V) on four sentences from the IBC. Node color is the true label (red for conservative, blue for liberal), and an "X" next to a node means the model's prediction was wrong. In A and C, the model accurately detects conservative-to-liberal polarity switches, while in B it correctly predicts the liberal-to-conservative switch. In D, negation confuses our model.

6 Related Work

A growing NLP subfield detects private states such as opinions, sentiment, and beliefs (Wilson et al., 2005; Pang and Lee, 2008) from text. In general, work in this category tends to combine traditional surface lexical modeling (e.g., bag-of-words) with hand-designed syntactic features or lexicons. Here we review the most salient literature related to the present paper.

6.1 Automatic Ideology Detection

Most previous work on ideology detection ignores the syntactic structure of the language in use in favor of familiar bag-of-words representations, for the sake of simplicity. For example, Gentzkow and Shapiro (2010) derive a "slant index" to rate the ideological leaning of newspapers. A newspaper's slant index is governed by the frequency of use of partisan collocations of 2-3 tokens. Similarly, authors have relied on simple models of language when leveraging inferred ideological positions. For example, Gerrish and Blei (2011) predict the voting patterns of Congress members based on bag-of-words representations of bills and inferred political leanings of those members. Recently, Sim et al. (2013) have proposed a model to infer mixtures of ideological positions in documents, applied to understanding the evolution of ideological rhetoric used by political candidates during the campaign cycle. They use an HMM-based model, defining the states as a set of fine-grained political ideologies, and rely on a closed set of lexical bigram features associated with each ideology, inferred from a manually labeled ideological books corpus. Although it takes elements of discourse structure into account (capturing the "burstiness" of ideological terminology usage), their model explicitly ignores intra-sentential contextual influences of the kind seen in Figure 1. Other approaches on the document level use topic models to analyze bias in news articles, blogs, and political speeches (Ahmed and Xing, 2010; Lin et al., 2008; Nguyen et al., 2013).

6.2 Subjectivity Detection

Detecting subjective language, which conveys opinion or speculation, is a related NLP problem. While sentences lacking subjective language may contain ideological bias (e.g., the topic of the sentence), highly opinionated sentences likely have obvious ideological leanings. In addition, sentiment and subjectivity analysis offers methodological approaches that can be applied to automatic bias detection. Wiebe et al.
(2004) show that low-frequency words and some collocations are a good indicators of subjectivity. More recently, Recasens et al. (2013) detect biased words in sentences using indicator features for bias cues such as hedges and factive verbs in addition to standard bag-of-words and part-of-speech features. They show that this type of linguistic information dramatically improves performance over several standard baselines. Greene and Resnik (2009) also emphasize the connection between syntactic and semantic relationships in their work on “implicit sentiment”, 1120 n Most conservative n-grams Most liberal n-grams 1 Salt, Mexico, housework, speculated, consensus, lawyer, pharmaceuticals, ruthless, deadly, Clinton, redistribution rich, antipsychotic, malaria, biodiversity, richest, gene, pesticides, desertification, Net, wealthiest, labor, fertilizer, nuclear, HIV 3 prize individual liberty, original liberal idiots, stock market crash, God gives freedom, federal government interference, federal oppression nullification, respect individual liberty, Tea Party patriots, radical Sunni Islamists, Obama stimulus programs rich and poor,“corporate greed”, super rich pay, carrying the rich, corporate interest groups, young women workers, the very rich, for the rich, by the rich, soaking the rich, getting rich often, great and rich, the working poor, corporate income tax, the poor migrants 5 spending on popular government programs, bailouts and unfunded government promises, North America from external threats, government regulations place on businesses, strong Church of Christ convictions, radical Islamism and other threats the rich are really rich, effective forms of worker participation, the pensions of the poor, tax cuts for the rich, the ecological services of biodiversity, poor children and pregnant women, vacation time for overtime pay 7 government intervention helped make the Depression Great, by God in His image and likeness, producing wealth instead of stunting capital creation, the traditional American values of limited government, trillions of dollars to overseas oil producers, its troubled assets to federal sugar daddies, Obama and his party as racialist fanatics African Americans and other disproportionately poor groups; the growing gap between rich and poor; the Bush tax cuts for the rich; public outrage at corporate and societal greed; sexually transmitted diseases , most notably AIDS; organize unions or fight for better conditions, the biggest hope for health care reform Table 2: Highest probability n-grams for conservative and liberal ideologies, as predicted by the RNN2(W2V) model. which refers to sentiment carried by sentence structure and not word choice. They use syntactic dependency relation features combined with lexical information to achieve then state-of-the-art performance on standard sentiment analysis datasets. However, these syntactic features are only computed for a thresholded list of domain-specific verbs. This work extends their insight of modeling sentiment as an interaction between syntax and semantics to ideological bias. Future Work There are a few obvious directions in which this work can be expanded. First, we can consider more nuanced political ideologies beyond liberal and conservative. We show that it is possible to detect ideological bias given this binary problem; however, a finer-grained study that also includes neutral annotations may reveal more subtle distinctions between ideologies. 
While acquiring data with obscure political biases from the IBC or Convote is unfeasible, we can apply a similar analysis to social media (e.g., Twitter or Facebook updates) to discover how many different ideologies propagate in these networks. Another direction is to implement more sophisticated RNN models (along with more training data) for bias detection. We attempted to apply syntactically-untied RNNs (Socher et al., 2013a) to our data with the idea that associating separate matrices for phrasal categories would improve representations at high-level nodes. While there were too many parameters for this model to work well here, other variations might prove successful, especially with more data. Finally, combining sentencelevel and document-level models might improve bias detection at both levels. 7 Conclusion In this paper we apply recursive neural networks to political ideology detection, a problem where previous work relies heavily on bag-of-words models and hand-designed lexica. We show that our approach detects bias more accurately than existing methods on two different datasets. In addition, we describe an approach to crowdsourcing ideological bias annotations. We use this approach to create a new dataset from the IBC, which is labeled at both the sentence and phrase level. Acknowledgments We thank the anonymous reviewers, Hal Daum´e, Yuening Hu, Yasuhiro Takayama, and Jyothi Vinjumur for their insightful comments. We also want to thank Justin Gross for providing the IBC and Asad Sayeed for help with the Crowdflower task design, as well as Richard Socher and Karl Moritz Hermann for assisting us with our model implementations. This work was supported by NSF Grant CCF-1018625. Boyd-Graber is also supported by NSF Grant IIS-1320538. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor. 1121 References Amr Ahmed and Eric P Xing. 2010. Staying informed: supervised and semi-supervised multi-view topical analysis of ideological perspective. In EMNLP. Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Comp. Ling., 18(4):467–479. Royce Carroll, Jeffrey B Lewis, James Lo, Keith T Poole, and Howard Rosenthal. 2009. Measuring bias and uncertainty in dw-nominate ideal point estimates via the parametric bootstrap. Political Analysis, 17(3):261–275. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML. Frank E Dardis, Frank R Baumgartner, Amber E Boydstun, Suzanna De Boef, and Fuyuan Shen. 2008. Media framing of capital punishment and its impact on individuals’ cognitive responses. Mass Communication & Society, 11(2):115– 140. Matthew Gentzkow and Jesse M Shapiro. 2010. What drives media slant? evidence from us daily newspapers. Econometrica, 78(1):35–71. Sean Gerrish and David M Blei. 2011. Predicting legislative roll calls from text. In ICML. Christoph Goller and Andreas Kuchler. 1996. Learning taskdependent distributed representations by backpropagation through structure. In Neural Networks, 1996., IEEE International Conference on, volume 1. Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In NAACL. Tim Groseclose and Jeffrey Milyo. 2005. A measure of media bias. The Quarterly Journal of Economics, 120(4):1191– 1237. 
Justin Gross, Brice Acree, Yanchuan Sim, and Noah A Smith. 2013. Testing the etch-a-sketch hypothesis: A computational analysis of mitt romney’s ideological makeover during the 2012 primary vs. general elections. In APSA 2013 Annual Meeting Paper. Kazuma Hashimoto, Makoto Miwa, Yoshimasa Tsuruoka, and Takashi Chikayama. 2013. Simple customization of recursive neural networks for semantic relation classification. In EMNLP. Karl Moritz Hermann and Phil Blunsom. 2013. The Role of Syntax in Vector Space Models of Compositional Semantics. In ACL. George Lakoff. 2002. Moral Politics: How Liberals and Conservatives Think, Second Edition. University of Chicago Press. Wei-Hao Lin, Eric Xing, and Alexander Hauptmann. 2008. A joint topic and perspective model for ideological discourse. In Machine Learning and Knowledge Discovery in Databases, pages 17–32. Springer. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Viet-An Nguyen, Jordan Boyd-Graber, and Philip Resnik. 2013. Lexical and hierarchical topic regression. In NIPS, pages 1106–1114. David Niven. 2003. Objective evidence on media bias: Newspaper coverage of congressional party switchers. Journalism & Mass Communication Quarterly, 80(2):311–326. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2(1-2). James W. Pennebaker, Martha E. Francis, and Roger J. Booth. 2001. Linguistic inquiry and word count [computer software]. Mahwah, NJ: Erlbaum Publishers. Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for analyzing and detecting biased language. Asad B Sayeed, Jordan Boyd-Graber, Bryan Rusk, and Amy Weinberg. 2012. Grammatical structures for word-level sentiment detection. In NAACL. Burr Settles. 2011. Closing the loop: Fast, interactive semi-supervised annotation with queries on features and instances. In EMNLP. Yanchuan Sim, Brice Acree, Justin H Gross, and Noah A Smith. 2013. Measuring ideological proportions in political speeches. In EMNLP. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011a. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection. In NIPS. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011b. Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions. In EMNLP. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013a. Parsing With Compositional Vector Grammars. In ACL. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In EMNLP. Janyce Wiebe, Theresa Wilson, Rebecca Bruce, Matthew Bell, and Melanie Martin. 2004. Learning subjective language. Comp. Ling., 30(3):277–308. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In EMNLP. Tae Yano, Philip Resnik, and Noah A Smith. 2010. Shedding (a thousand points of) light on biased language. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 152–158. 1122
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1123–1133, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Unified Model for Soft Linguistic Reordering Constraints in Statistical Machine Translation Junhui Li† Yuval Marton‡ Philip Resnik† Hal Daum´e III† †UMIACS, University of Maryland, College Park, MD {lijunhui, resnik, hal}@umiacs.umd.edu ‡Microsoft Corp., City Center Plaza, Bellevue, WA [email protected] Abstract This paper explores a simple and effective unified framework for incorporating soft linguistic reordering constraints into a hierarchical phrase-based translation system: 1) a syntactic reordering model that explores reorderings for context free grammar rules; and 2) a semantic reordering model that focuses on the reordering of predicate-argument structures. We develop novel features based on both models and use them as soft constraints to guide the translation process. Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system. However, the gain achieved by the semantic reordering model is limited in the presence of the syntactic reordering model, and we therefore provide a detailed analysis of the behavior differences between the two. 1 Introduction Reordering models in statistical machine translation (SMT) model the word order difference when translating from one language to another. The popular distortion or lexicalized reordering models in phrase-based SMT make good local predictions by focusing on reordering on word level, while the synchronous context free grammars in hierarchical phrase-based (HPB) translation models are capable of handling non-local reordering on the translation phrase level. However, reordering, especially without any help of external knowledge, remains a great challenge because an accurate reordering is usually beyond these word level or translation phrase level reordering models’ ability. In addition, often these translation models fail to respect linguistically-motivated syntax and semantics. As a result, they tend to produce translations containing both syntactic and semantic reordering confusions. In this paper our goal is to take advantage of syntactic and semantic parsing to improve translation quality. Rather than introducing reordering models on either the word level or the translation phrase level, we propose a unified approach to modeling reordering on the linguistic unit level, e.g., syntactic constituents and semantic roles. The reordering unit falls into multiple granularities, from single words to more complex constituents and semantic roles, and often crosses translation phrases. To show the effectiveness of our reordering models, we integrate both syntactic constituent reordering models and semantic role reordering models into a state-ofthe-art HPB system (Chiang, 2007; Dyer et al., 2010). We further contrast it with a stronger baseline, already including fine-grained soft syntactic constraint features (Marton and Resnik, 2008; Chiang et al., 2008). The general ideas, however, are applicable to other translation models, e.g., phrase-based model, as well. Our syntactic constituent reordering model considers context free grammar (CFG) rules in the source language and predicts the reordering of their elements on the target side, using word alignment information. 
Due to the fact that a constituent, especially a long one, usually maps into multiple discontinuous blocks in the target language, there is more than one way to describe the monotonicity or swapping patterns; we therefore design two reordering models: one is based on the leftmost aligned target word and the other based on the rightmost target word. While recently there has also been some encouraging work on incorporating semantic structure (or, more specifically, predicate-argument structure: PAS) reordering in SMT, it is still an open question whether semantic structure reordering 1123 strongly overlaps with syntactic structure reordering, since the semantic structure is closely tied to syntax. To this end, we employ the same reordering framework as syntactic constituent reordering and focus on semantic roles in a PAS. We then analyze the differences between the syntactic and semantic features. The contributions of this paper include the following: • We introduce novel soft reordering constraints, using syntactic constituents or semantic roles, composed over word alignment information in translation rules used during decoding time; • We introduce a unified framework to incorporate syntactic and semantic reordering constraints; • We provide a detailed analysis providing insight into why the semantic reordering model is significantly less effective when syntactic reordering features are also present. The rest of the paper is organized as follows. Section 2 provides an overview of HPB translation model. Section 3 describes the details of our unified reordering models. Section 4 gives our experimental results and Section 5 discusses the behavior difference between syntactic constituent reordering and semantic role reordering. Section 6 reviews related work and, finally Section 7 concludes the paper. 2 HPB Translation Model: an Overview In HPB models (Chiang, 2007), synchronous rules take the form X →⟨γ, α, ∼⟩, where X is the nonterminal symbol, γ and α are strings of lexical items and non-terminals in the source and target side, respectively, and ∼indicates the one-to-one correspondence between non-terminals in γ and α. Each such rule is associated with a set of translation model features {φi}, such as phrase translation probability p (α | γ) and its inverse p (γ | α), the lexical translation probability plex (α | γ) and its inverse plex (γ | α), and a rule penalty that affects preference for longer or shorter derivations. Two other widely used features are a target language model feature and a target word penalty. Given a derivation d, its translation logprobability is estimated as: log P (d) ∝ X i λiφi (d) (1)   PAS   A0   (NP)   TMP   (NP)   Pre   (VBD)   A1   (NP)   Applicants            yesterday            filled            the  forms   Figure 1: Example of predicate-argument structure. where λi is the corresponding weight of feature φi. See (Chiang, 2007) for more details. 3 Unified Linguistic Reordering Models As mentioned earlier, the linguistic reordering unit is the syntactic constituent for syntactic reordering, and the semantic role for semantic reordering. The syntactic reordering model takes a CFG rule (e.g., VP →VP PP PP) and models the reordering of the constituents on the left hand side by examining their translation or visit order according to the target language. For the semantic reordering model, it takes a PAS and models its reordering on the target side. 
Figure 1 shows an example of a PAS where the predicate (Pre) has two core arguments (A0 and A1) and one adjunct (TMP). Note that we refer to all core arguments, adjuncts, and predicates as semantic roles; thus we say the PAS in Figure 1 has 4 roles. According to the annotation principles in (Chinese) PropBank (Palmer et al., 2005; Xue and Palmer, 2009), all the roles in a PAS map to a corresponding constituent in the parse tree, and these constituents (e.g., the NPs and VBD in Figure 1) do not overlap with each other.

Next, we use a CFG rule to describe our syntactic reordering model. Treating the two forms of reordering in a unified way, the semantic reordering model is obtained by regarding a PAS as a CFG rule and considering a semantic role as a constituent. Because the translation of a source constituent might result in multiple discontinuous blocks, there can be several ways to describe or group the reordering patterns. Therefore, we design two general constituent reordering sub-models: one is based on the leftmost aligned word (leftmost reordering model) and the other is based on the rightmost aligned word (rightmost reordering model), as follows.

Figure 2: Modeling process illustration for the leftmost reordering model: (a) a CFG rule and its alignment; (b) leftmost aligned target words; (c) visit order; (d) reordering types.

Figure 2 shows the modeling steps for the leftmost reordering model. Figure 2(a) is an example of a CFG rule in the source parse tree and its word alignment links to the target language. Note that constituent XP4, which covers word f8, has no alignment. Then for each XPi, we find the leftmost target word which is aligned to a source word covered by XPi. Figure 2(b) shows that the leftmost target words for XP1, XP2, and XP3 are e2, e5, and e3, respectively, while XP4 has no aligned target word. Then we get the visit order V = {vi} for {XPi} in the transformation from Figure 2(b) to Figure 2(c), with the following strategies for special cases:
• if the first constituent XP1 is unaligned, we add a NULL word at the beginning of the target side and link XP1 to the NULL word;
• if a constituent XPi (i > 1) is unaligned, we add a link to the target word which is aligned to XPi−1, e.g., XP4 will be linked to e3; and
• if k constituents XPm1 . . . XPmk (m1 < . . . < mk) are linked to the same target word, then vmj = vmj+1 − 1 for j = 1, . . . , k − 1, i.e., they receive consecutive visit orders in source order; e.g., since XP3 and XP4 are both linked to e3, v3 = v4 − 1.

Finally, Figure 2(d) converts the visit order V = {v1, . . . , vn} into a sequence of leftmost reordering types LRT = {lrt1, . . . , lrtn−1}. For every two adjacent constituents XPi and XPi+1 with corresponding visit orders vi and vi+1, their reordering could be one of the following:
• Monotone (M) if vi+1 = vi + 1;
• Discontinuous Monotone (DM) if vi+1 > vi + 1;
• Swap (S) if vi+1 = vi − 1;
• Discontinuous Swap (DS) if vi+1 < vi − 1.

Up to this point, we have generated a sequence of leftmost reordering types LRT = {lrt1, . . . , lrtn−1} for a given CFG rule cfg: XP → XP1 . . . XPn. The leftmost reordering model takes the following form:

scorelrt(cfg) = Pl(lrt1, . . . , lrtn−1 | ψ(cfg))    (2)

where ψ(cfg) indicates the surrounding context of the CFG rule.
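As a concrete illustration of the procedure just described (before the model is formalized further below), the following sketch computes the visit order and the resulting reordering types from per-constituent alignment information. The data structures are our own simplification; the example reproduces the case in Figure 2.

```python
# Sketch of the visit-order computation and reordering-type classification (cf. Figure 2).
# A constituent is represented by the sorted target positions its source words align to;
# the representation is an assumption for illustration, not the paper's implementation.
def visit_order(aligned_targets):
    """aligned_targets[i] = sorted target positions aligned to constituent XP_{i+1}
    (empty list if unaligned). Returns the visit order v_1..v_n."""
    n = len(aligned_targets)
    anchors = []
    for i, targets in enumerate(aligned_targets):
        if targets:                         # leftmost aligned target word
            anchors.append(targets[0])
        elif i == 0:                        # unaligned first constituent -> NULL word
            anchors.append(-1)
        else:                               # unaligned XP_i (i > 1) -> reuse XP_{i-1}'s word
            anchors.append(anchors[-1])
    # Sort by (anchor, source position); ties get consecutive ranks in source order.
    ranked = sorted(range(n), key=lambda i: (anchors[i], i))
    order = [0] * n
    for rank, i in enumerate(ranked, start=1):
        order[i] = rank
    return order

def reordering_types(order):
    """Classify each adjacent constituent pair as M, DM, S, or DS."""
    types = []
    for v_i, v_next in zip(order, order[1:]):
        if v_next == v_i + 1:
            types.append("M")
        elif v_next > v_i + 1:
            types.append("DM")
        elif v_next == v_i - 1:
            types.append("S")
        else:
            types.append("DS")
    return types

# The example of Figure 2: XP1 -> e2, XP2 -> e5, XP3 -> e3, XP4 unaligned.
order = visit_order([[2], [5], [3], []])
print(order)                      # [1, 4, 2, 3]
print(reordering_types(order))    # ['DM', 'DS', 'M']
```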
By assuming that any two reordering types in LRT = {lrt1, . . . , lrtn−1} are independent of each other, we reformulate Eq. 2 into: scorelrt (cfg) = n−1 Y i=1 Pl (lrti | ψ (cfg)) (3) Similarly, the sequence of rightmost reordering types RRT can be decided for a CFG rule XP → XP1 . . . XPn. Accordingly, for a PAS pas: PAS →R1 . . . Rn, we can obtain its sequences of leftmost and rightmost reordering types by using the same way described above. 3.1 Probability Estimation In order to predict either the leftmost or rightmost reordering type for two adjacent constituents, we use a maximum entropy classifier to estimate the probability of the reordering type rt ∈ {M, DM, S, DS} as follows: P (rt | ψ (cfg)) = exp (P k θkfk (rt, ψ (cfg))) P rt′ exp (P k θkfi (rt′, ψ (cfg))) (4) where fk are binary features, θk are the weights of these features. Most of our features fk are syntaxbased. For XPi and XPi+1 in cfg, the features 1125 #Index Feature cf1 L(XPi) & L(XPi+1) & L(XP) cf2 for each XPj (j < i) L(XPi) & L(XPi+1) & L(XP) & L(XPj) cf3 for each XPj (j > i + 1) L(XPi) & L(XPi+1) & L(XP) & L(XPj) cf4 L(XPi) & L(XPi+1) & P(XPi) cf5 L(XPi) & L(XPi+1) & H(XPi) cf6 L(XPi) & L(XPi+1) & P(XPi+1) cf7 L(XPi) & L(XPi+1) & H(XPi+1) cf8 L(XPi) & L(XPi+1) & S(XPi) cf9 L(XPi) & L(XPi+1) & S(XPi+1) cf10 L(XPi) & L(XP) cf11 L(XPi+1) & L(XP) Table 1: Features adopted in the syntactic leftmost and rightmost reordering models. L (XP) returns the syntactic category of XP, e.g., NP, VP, PP etc.; H (XP) returns the head word of XP; P (XP) returns the POS tagger of the head word; S (XP) returns the translation status of XP on the target language: un. if it is untranslated; cont. if it is a continuous block; and discont. if it maps into multiple discontinuous blocks. are aimed to examine which of them should be translated first. Therefore, most features share two common components: the syntactic categories of XPi and XPi+1. Table 1 shows the features used in syntactic leftmost and rightmost reordering models. Note that we use the same features for both. Although the semantic reordering model is structured in precisely the same way, we use different feature sets to predict the reordering between two semantic roles. Given the two adjacent roles Ri and Ri+1 in a PAS pas, Table 2 shows the features that are used in the semantic leftmost and rightmost reordering models. 3.2 Integrating into the HPB Model For models with syntactic reordering, we add two new features (i.e., one for the leftmost reordering model and the other for the rightmost reordering model) into the log-linear translation model in Eq. 1. Unlike the conventional phrase and lexical translation features, whose values are phrase pair-determined and thus can be calculated offline, the value of the reordering features can only be obtained during decoding time, and requires word alignment information as well. 
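As a small illustration of the classifier in Eq. (4), the sketch below scores the four reordering types for one constituent pair from a set of fired binary features and normalizes the result. The feature strings and weights are hypothetical instantiations of templates like cf1 and cf10 in Table 1, not learned values.

```python
# Sketch of the maximum entropy classifier in Eq. (4). Feature strings and weights
# are hypothetical; in practice theta is learned with an L1-regularized trainer.
import math

CLASSES = ["M", "DM", "S", "DS"]

theta = {
    ("S", "L(XPi)=PP&L(XPi+1)=VP&L(XP)=VP"): 1.2,
    ("M", "L(XPi)=PP&L(XPi+1)=VP&L(XP)=VP"): -0.4,
    ("M", "L(XPi)=PP&L(XP)=VP"): 0.6,
}

def maxent_probs(fired_features, weights=theta):
    """P(rt | psi(cfg)) for every reordering type, given the fired binary features."""
    scores = {rt: sum(weights.get((rt, f), 0.0) for f in fired_features)
              for rt in CLASSES}
    z = sum(math.exp(s) for s in scores.values())
    return {rt: math.exp(s) / z for rt, s in scores.items()}

# Features a <PP, VP> pair inside a VP rule might fire (hypothetical instantiation).
fired = ["L(XPi)=PP&L(XPi+1)=VP&L(XP)=VP", "L(XPi)=PP&L(XP)=VP"]
print(maxent_probs(fired))
```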
Before we present the algorithm integrating the reordering models, we define the following functions by assuming XPi and XPi+1 are the constituent pair of interest in CFG rule cfg, H is the translation hypothesis and a is its word alignment: #Index Feature rf1 R(Ri) & R(Ri+1) & P(pas) R(Ri) & R(Ri+1) rf2 for each Rj (j < i) R(Ri) & R(Ri+1) & R(Rj) & P(pas) R(Ri) & R(Ri+1) & R(Rj) rf3 for each Rj (j > i + 1) R(Ri) & R(Ri+1) & R(Rj) & P(pas) R(Ri) & R(Ri+1) & R(Rj) rf4 R(Ri) & R(Ri+1) & P(Ri) rf5 R(Ri) & R(Ri+1) & H(Ri) rf6 R(Ri) & R(Ri+1) & L(Ri) rf7 R(Ri) & R(Ri+1) & P(Ri+1) rf8 R(Ri) & R(Ri+1) & H(Ri+1) rf9 R(Ri) & R(Ri+1) & L(Ri+1) rf10 R(Ri) & R(Ri+1) & S(Ri) rf11 R(Ri) & R(Ri+1) & S(Ri+1) rf12 R(Ri) & P(pas) R(Ri) rf13 R(Ri+1) & P(pas) R(Ri+1) Table 2: Features adopted in the semantic leftmost and rightmost reordering models. P (pas) returns the predicate content of pas; R (R) returns the role type of R, e.g., Pred, A0, TMP, etc. For features rf1, rf2, rf3, rf12 and rf13, we include another version which excludes the predicate content P(pas) for reasons of sparsity. • F1 (w1, w2, XP): returns true if constituent XP is within the span from word w1 to w2; otherwise returns false. • F2 (H, cfg, XPi, XPi+1) returns true if the reordering of the pair ⟨XPi, XPi+1⟩in rule cfg has not been calculated yet; otherwise returns false. • F3 (H, a, XPi, XPi+1) returns the leftmost and rightmost reordering types for the constituent pair ⟨XPi, XPi+1⟩, given alignment a, according to Section 3. • F4 (rt, cfg, XPi, XPi+1) returns the probability of leftmost reordering type rt for the constituent pair ⟨XPi, XPi+1⟩in rule cfg. • F5 (rt, cfg, XPi, XPi+1) returns the probability of rightmost reordering type rt for the constituent pair ⟨XPi, XPi+1⟩in rule cfg. Algorithm 1 integrates the syntactic leftmost and rightmost reordering models into a CKY-style decoder whenever a new hypothesis is generated. Given a hypothesis H with its alignment a, it traverses all CFG rules in the parse tree and sees if two adjacent constituents are conditioned to trigger the reordering models (lines 2-4). For each pair of constituents, it first extracts its leftmost and rightmost reordering types (line 6) and then gets their respective probabilities returned by the maximum entropy classifiers defined in Section 3.1 1126 Algorithm 1: Integrating the syntactic reordering models into a CKY-style decoder Input: Sentence f in the source language Parse tree t of f All CFG rules {cfg} in t Hypothesis H spanning from word w1 to w2 Alignment a of H Output: Log-Probabilities of the syntactic leftmost and rightmost reordering models 1. set l prob = rprob = 0.0 2. foreach cfg in {cfg} 3. foreach pair XPi and XPi+1 in cfg 4. if F1 (w1, w2, XPi) = false or F1 (w1, w2, XPi+1) = false or F2 (H, cfg, XPi, XPi+1) = false 5. continue 6. (l type, r type) = F3 (H, a, XPi, XPi+1) 7. l prob += log F4 (l type, cfg, XPi, XPi+1) 8. r prob += log F5 (r type, cfg, XPi, XPi+1) 9. return (l prob, r prob) (lines 7-8). Then the algorithm returns two logprobabilities of the syntactic reordering models. Note that Function F1 returns true if hypothesis H fully covers, or fully contains, constituent XPi, regardless of the reordering type of XPi. Do not confuse any parsing tag XPi with the nameless variables Xi in Hiero or cdec rules. For the semantic reordering models, we also add two new features into the log-linear translation model. 
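A rough Python rendering of Algorithm 1's scoring loop is sketched below; the data structures and classifier lookups are hypothetical stand-ins for the functions F1–F5. As the next paragraph notes, the same loop serves the semantic case once CFG rules and constituents are replaced by PASs and roles.

```python
# Rough rendering of Algorithm 1 (hypothetical data structures): each constituent pair
# fully covered by the hypothesis contributes the log-probability of its observed
# leftmost/rightmost reordering types.
import math

def within_span(span, constituent_span):
    """F1: does the hypothesis span [w1, w2] fully contain the constituent?"""
    (w1, w2), (c1, c2) = span, constituent_span
    return w1 <= c1 and c2 <= w2

def reordering_feature_scores(cfg_rules, hyp_span, observed_types, classifiers, seen):
    """Returns the leftmost and rightmost log-probability feature values for one hypothesis.

    cfg_rules:      list of (rule_id, [constituent spans]) from the source parse
    observed_types: (rule_id, i) -> (leftmost type, rightmost type) under the hypothesis'
                    word alignment, computed as in Section 3
    classifiers:    (rule_id, i, 'l'|'r') -> {type: probability} from the MaxEnt models
    seen:           (rule_id, i) pairs already scored lower in the derivation (F2)
    """
    l_prob = r_prob = 0.0
    for rule_id, spans in cfg_rules:
        for i in range(len(spans) - 1):
            key = (rule_id, i)
            if key in seen:                                   # F2: already computed
                continue
            if not (within_span(hyp_span, spans[i]) and
                    within_span(hyp_span, spans[i + 1])):     # F1 on both constituents
                continue
            l_type, r_type = observed_types[key]              # F3
            l_prob += math.log(classifiers[(rule_id, i, "l")][l_type])   # F4
            r_prob += math.log(classifiers[(rule_id, i, "r")][r_type])   # F5
            seen.add(key)
    return l_prob, r_prob

# Toy usage: one rule VP -> PP VP spanning words 0-5, hypothesis covering words 0-5.
rules = [("r0", [(0, 2), (3, 5)])]
obs = {("r0", 0): ("S", "S")}
clf = {("r0", 0, "l"): {"M": 0.3, "S": 0.6, "DM": 0.05, "DS": 0.05},
       ("r0", 0, "r"): {"M": 0.35, "S": 0.55, "DM": 0.05, "DS": 0.05}}
print(reordering_feature_scores(rules, (0, 5), obs, clf, seen=set()))
```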
To get the two semantic reordering model feature values, we simply use Algorithm 1 and its associated functions from F1 to F5 replacing a CFG rule cfg with a PAS pas, and a constituent XPi with a semantic role Ri. Algorithm 1 therefore permits a unified treatment of syntactic and PAS-based reordering, even though it is expressed in terms of syntactic reordering here for ease of presentation. 4 Experiments We have presented our unified approach to incorporating syntactic and semantic soft reordering constraints in an HPB system. In this section, we test its effectiveness in Chinese-English translation. 4.1 Experimental Settings For training we use 1.6M sentence pairs of the non-UN and non-HK Hansards portions of NIST MT training corpora, segmented with the Stanford segmenter (Tseng et al., 2005). The English data is lowercased, tokenized and aligned with GIZA++ (Och and Ney, 2000) to obtain bidirectional alignments, which are symmetrized using the grow-diag-final-and method (Koehn et al., 2003). We train a 4-gram LM on the English side of the corpus with 600M additional words from non-NYT and non-LAT, randomly selected portions of the Gigaword v4 corpus, using modified Kneser-Ney smoothing (Chen and Goodman, 1996). We use the HPB decoder cdec (Dyer et al., 2010), with Mr. Mira (Eidelman et al., 2013), which is a k-best variant of MIRA (Chiang et al., 2008), to tune the parameters of the system. We use NIST MT 06 dataset (1664 sentence pairs) for tuning, and NIST MT 03, 05, and 08 datasets (919, 1082, and 1357 sentence pairs, respectively) for evaluation.1 We use BLEU (Papineni et al., 2002) for both tuning and evaluation. To obtain syntactic parse trees and semantic roles on the tuning and test datasets, we first parse the source sentences with the Berkeley Parser (Petrov and Klein, 2007), trained on the Chinese Treebank 7.0 (Xue et al., 2005). We then pass the parses to a Chinese semantic role labeler (Li et al., 2010), trained on the Chinese PropBank 3.0 (Xue and Palmer, 2009), to annotate semantic roles for all verbal predicates (partof-speech tag VV, VE, or VC). Our basic baseline system employs 19 basic features: a language model feature, 7 translation model features, word penalty, unknown word penalty, the glue rule, date, number and 6 passthrough features. Our stronger baseline employs, in addition, the fine-grained syntactic soft constraint features of Marton and Resnik (2008), hereafter MR08. The syntactic soft constraint features include both MR08 exact-matching and crossboundary constraints (denoted XP= and XP+). Since the syntactic parses of the tuning and test data contain 29 types of constituent labels and 35 types of POS tags, we have 29 types of XP+ features and 64 types of XP= features. 4.2 Model Training To train the syntactic and semantic reordering models, we use a gold alignment dataset.2 It contains 7,870 sentences with 191,364 Chinese words and 261,399 English words. We first run syn1http://www.itl.nist.gov/iad/mig//tests/mt 2This dataset includes LDC2006E86, and newswire parts of LDC2012T16, LDC2012T20, LDC2012T24, and LDC2013T05. Indeed, the reordering models can also be trained on the MT training data with its automatic alignment. However, our preliminary experiments showed that the reordering models trained on gold alignment yielded higher improvement. 
1127 Reordering Type Syntactic Semantic l-m r-m l-m r-m M 73.5 80.6 63.8 67.9 DM 3.9 3.3 14.0 12.0 S 19.5 13.2 13.1 10.7 DS 3.2 3.0 9.1 9.5 #instance 199,234 66,757 Table 3: Reordering type distribution over the reordering model’s training data. Hereafter, l-m and r-m are for leftmost and rightmost, respectively. tactic parsing and semantic role labeling on the Chinese sentences, then train the models by using MaxEnt toolkit with L1 regularizer (Tsuruoka et al., 2009).3 Table 3 shows the reordering type distribution over the training data. Interestingly, about 17% of the syntactic instances and 16% of the semantic instances differ in their leftmost and rightmost reordering types, indicating that the leftmost/rightmost distinction is informative. We also see that the number of semantic instances is about 1/3 of that of syntactic instances, but the entropy of the semantic reordering classes is higher, indicating the reordering of semantic roles is harder than that of syntactic constituents. A deeper examination of the reordering model’s training data reveals that some constituent pairs and semantic role pairs have a preference for a specific reordering type (monotone or swap). In order to understand how well the MR08 system respects their reordering preference, we use the gold alignment dataset LDC2006E86, in which the source sentences are from the Chinese Treebank, and thus both the gold parse trees and gold predicate-argument structures are available. Table 4 presents examples comparing the reordering distribution between gold alignment and the output of the MR08 system. For example, the first row shows that based on the gold alignment, for ⟨PP,VP⟩, 16% are in monotone and 76% are in swap reordering. However, our MR08 system outputs 46% of them in monotone and and 50% in swap reordering. Hence, the reordering accuracy for ⟨PP,VP⟩is 54%. Table 4 also shows that the semantic reordering between core arguments and predicates (e.g., ⟨Pred, A1⟩, ⟨A0, Pred⟩) has a less ambiguous pattern than that between adjuncts and other roles (e.g., ⟨LOC,Pred⟩, ⟨A0,TMP⟩), indicating the higher reordering flexibility of adjuncts. 3http://www.logos.ic.i.u-tokyo.ac.jp/∼tsuruoka/maxent/ Const. Pair Gold MR08 output M S M S acc. PP VP 16 76 46 50 54 NP LC 26 74 58 42 50 DNP NP 24 72 78 19 39 CP NP 26 67 84 10 33 NP DEG 39 61 31 69 66 ... ... ... all 81 13 79 14 80 Role Pair Gold MR08 output M S M S acc. Pred A1 84 6 82 9 72 A0 Pred 82 11 79 8 75 LOC Pred 17 30 36 25 49 A0 TMP 35 25 61 6 45 TMP Pred 30 22 49 19 43 ... ... ... all 63 13 73 9 64 Table 4: Examples of the reordering distribution (%) of gold alignment and the MR08 system output. For simplicity, we only focus on (M)onotone and (S)wap based on leftmost reordering. 4.3 Translation Experiment Results Our first group of experiments investigates whether the syntactic reordering models are able to improve translation quality in terms of BLEU. To this end, we respectively add our syntactic reordering models into both the baseline and MR08 systems. The effect is shown in the rows of “+ synreorder” in Table 5. From the table, we have the following two observations. • Although the HPB model is capable of handling non-local phrase reordering using synchronous context free grammars, both our syntactic leftmost reordering model and rightmost model are still able to achieve improvement over both the baseline and MR08. 
This suggests that our syntactic reordering features interact well with the MR08 syntactic soft constraints: the XP+ and XP= features focus on a single constituent each, while our reordering features focus on a pair of constituents each. • There is no clear indication of whether the leftmost reordering model works better than the other. In addition, integrating both the leftmost and rightmost reordering models has limited improvement over a single reordering model. Our second group of experiments is to validate the semantic reordering models. Results are 1128 System Tuning Test MT06 MT03 MT05 MT08 Avg. Baseline 34.1 36.1 32.3 27.4 31.9 + synreorder l-m 35.2 36.9‡ 33.6‡ 28.4‡ 33.0 r-m 35.2 37.2‡ 33.7‡ 28.6‡ 33.2 both 35.6 37.1‡ 33.6‡ 28.8‡ 33.1 + semreorder l-m 34.4 36.7‡ 33.0‡ 27.8† 32.5 r-m 34.5 36.7‡ 33.1‡ 27.8‡ 32.5 both 34.5 37.0‡ 33.6‡ 27.7† 32.8 +syn+sem 35.5 37.3‡ 33.7‡ 29.0‡ 33.3 MR08 35.6 37.4 34.2 28.7 33.4 + synreorder l-m 36.0 38.2‡ 35.0‡ 29.2‡ 34.1 r-m 36.0 38.1‡ 34.8‡ 29.2‡ 34.0 both 35.9 38.2‡ 35.3‡ 29.5‡ 34.3 + semreorder l-m 35.8 37.6† 34.7‡ 28.7 33.7 r-m 35.8 37.4 34.5† 28.8 33.6 both 35.8 37.6† 34.7‡ 28.8 33.7 +syn+sem 36.1 38.4‡ 35.2‡ 29.5‡ 34.4 Table 5: System performance in BLEU scores. ‡/†: significant over baseline or MR08 at 0.01 / 0.05, respectively, as tested by bootstrap resampling (Koehn, 2004) shown in the rows of “+ sem-reorder” in Table 5. Here we observe: • The semantic reordering models also achieve significant gain of 0.8 BLEU on average over the baseline system, demonstrating the effectiveness of PAS-based reordering. However, the gain diminishes to 0.3 BLEU on the MR08 system. • The syntactic reordering models outperform the semantic reordering models on both the baseline and MR08 systems. Finally, we integrate both the syntactic and semantic reordering models into the final system. The two models collectively achieve a gain of up to 1.4 BLEU over the baseline and 1.0 BLEU over MR08 on average, which is shown in the rows of “+syn+sem” in Table 5. 5 Discussion The trend of the results, summarized as performance gain over the baseline and MR08 systems averaged over all test sets, is presented in Table 6. The syntactic reordering models outperform the semantic reordering models, and the gain achieved by the semantic reordering models is limited in the presence of the MR08 syntactic features. In this section, we look at MR08 system and the systems improving it to explore the behavior differences between the two reordering models. Coverage analysis: Our statistics show that syntactic reordering features (either leftmost or System Baseline MR08 +syn-reorder 1.2 0.9 +sem-reorder 0.8 0.3 + both 1.4 1.0 Table 6: Performance gain in BLEU over baseline and MR08 systems averaged over all test sets. rightmost) are called 24 times per sentence on average. This is compared to only 9 times per sentence for semantic reordering features. This is not surprising since the semantic reordering features are exclusively attached to predicates, and the span set of the semantic roles is a strict subset of the span set of the syntactic constituents; only 22% of syntactic constituents are semantic roles. On average, a sentences has 4 PASs and each PAS contains 3 semantic roles. Of all the semantic role pairs, 44% are in the same CFG rules, indicating that this part of semantic reordering has overlap with syntactic reordering. Therefore, the PAS model has fewer opportunities to influence reordering. 
Reordering accuracy analysis: The reordering type distribution on the reordering model's training data in Table 3 suggests that semantic reordering is more difficult than syntactic reordering. To validate this conjecture on our translation test data, we compare the reordering performance among the MR08 system, the improved systems and the maximum entropy classifiers. For the test set, we have four reference translations. We run GIZA++ on the combination of our translation training data and test data to get the alignment for the test data and each reference translation. Once we have the (semi-)gold alignment, we compute the gold reordering types between two adjacent syntactic constituents or semantic roles. Then we evaluate the automatic reordering outputs generated from both our translation systems and the maximum entropy classifiers.

System              Syntactic         Semantic
                    l-m      r-m      l-m      r-m
MR08                75.0     78.0     66.3     68.5
+syn-reorder        78.4     80.9     69.0     70.2
+sem-reorder        76.0     78.8     70.7     72.7
+both               78.6     81.7     70.6     72.1
Maxent Classifier   80.7     85.6     70.9     73.5
Table 7: Reordering accuracy on four gold sets.

Table 7 shows the accuracy averaged over the four gold reordering sets (the four reference translations). It shows that 1) as expected, our classifiers do worse on the harder semantic reordering prediction than on syntactic reordering prediction; 2) thanks to the high accuracy obtained by the maxent classifiers, integrating either the syntactic or the semantic reordering constraints results in better reordering performance from both syntactic and semantic perspectives; 3) in terms of the mutual impact, the syntactic reordering models help improve semantic reordering more than the semantic reordering models help improve syntactic reordering; and 4) the rightmost models have a learnability advantage over the leftmost models, achieving higher accuracy across the board.

System          Syntactic         Semantic
                l-m      r-m      l-m      r-m
+syn-reorder    1.2      1.2
+sem-reorder                      0.7      0.9
+both           1.2      1.0      0.5      0.4
Table 8: Reordering feature weights.

Feature weight analysis: Table 8 shows the syntactic and semantic reordering feature weights. It shows that the semantic feature weights decrease in the presence of the syntactic features, indicating that the decoder learns to trust semantic features less in the presence of the more accurate syntactic features. This is consistent with our observation that semantic reordering is harder than syntactic reordering, as seen in Tables 3 and 7.
Potential improvement analysis: Table 7 also shows that our current maximum entropy classifiers have room for improvement, especially for semantic reordering. In order to explore the error propagation from the classifiers themselves and to explore the upper bound for improvement from the reordering models, we perform an "oracle" study, letting the classifiers be aware of the "gold" reordering type between two syntactic constituents or two semantic roles, and returning a higher probability for the gold reordering type and a smaller one for the others (i.e., we set 0.9 for the gold reordering type, and let the other three non-gold types share 0.1).

                  MT03    MT05    MT08    Avg.
Non-Oracle
  MR08            37.4    34.2    28.7    33.4
  +syn-reorder    38.2    35.3    29.5    34.3
  +sem-reorder    37.6    34.7    28.8    33.7
  + both          38.4    35.2    29.5    34.4
Oracle
  +syn-reorder    39.2    35.9    29.6    34.9
  +sem-reorder    37.9    34.8    28.9    33.9
  + both          39.1    36.0    29.8    35.0
Table 9: Performance (BLEU score) comparison between non-oracle and oracle experiments.
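As a concrete illustration of the oracle setting (our sketch, not the authors' implementation), the gold reordering type can be assigned probability 0.9 while the three remaining types split the rest uniformly:

```python
# Toy sketch of the oracle probability assignment described above.
# The four reordering types follow Table 3; nothing here comes from the authors' code.
REORDERING_TYPES = ["M", "DM", "S", "DS"]

def oracle_distribution(gold_type, boost=0.9):
    """Replace the classifier prediction with a near-deterministic gold distribution."""
    rest = (1.0 - boost) / (len(REORDERING_TYPES) - 1)
    return {t: (boost if t == gold_type else rest) for t in REORDERING_TYPES}

print(oracle_distribution("S"))
# {'M': 0.0333..., 'DM': 0.0333..., 'S': 0.9, 'DS': 0.0333...}
```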
Again, to get the gold reordering type, we run GIZA++ to get the alignment for tuning/test source sentences and each of four reference translations. We report the averaged performance by using the gold reordering type extracted from the four reference translations. Table 9 compares the performance between the nonoracle and oracle settings. We clearly see that using gold syntactic reordering types significantly improves the performance (e.g., 34.9 vs. 33.4 on average) and there is still some room for improvement by building a better maximum entropy classifiers (e.g., 34.9 vs. 34.3). To our surprise, however, the improvement achieved by gold semantic reordering types is still small (e.g., 33.9 vs. 33.4), suggesting that the potential improvement of semantic reordering models is much more limited. And we again see that the improvement achieved by semantic reordering models is limited in the presence of the syntactic reordering models. 6 Related Work Syntax-based reordering: Some previous work pre-ordered words in the source sentences, so that the word order of source and target sentences is similar. The reordering rules were either manually designed (Collins et al., 2005; Wang et al., 2007; Xu et al., 2009; Lee et al., 2010) or automatically learned (Xia and McCord, 2004; Genzel, 2010; Visweswariah et al., 2010; Khalilov and Sima’an, 2011; Lerner and Petrov, 2013), using syntactic parses. Li et al. (2007) focused on finding the n-best pre-ordered source sentences by predicting the reordering of sibling constituents, while Yang et al. (2012) obtained word order by using a reranking approach to reposition nodes in syntactic parse trees. Both are close to our work; however, our model generates reordering features that are integrated into the log-linear translation model during decoding. Another approach in previous work added soft constraints as weighted features in the SMT decoder to reward good reorderings and penalize bad ones. Marton and Resnik (2008) employed soft syntactic constraints with weighted binary features and no MaxEnt model. They did not explicitly target reordering (beyond applying constraints on HPB rules). Although employing linguistically motivated labels in SCFG is capable of capturing constituent reorderings (Chiang, 2010; Mylon1130 akis and Sima’an, 2011), the rules are sparser than SCFG with nameless non-terminals (i.e., Xs) and soft constraints. Ge (2010) presented a syntaxdriven maximum entropy reordering model that predicted the source word translation order. Gao et al. (2011) employed dependency trees to predict the translation order of a word and its head word. Huang et al. (2013) predicted the translation order of two source words.4 Our work, which shares this approach, differs from their work primarily in that our syntactic reordering models are based on the constituent level, rather than the word level. Semantics-based reordering: Semanticsbased reordering has also seen an increase in activity recently. In the pre-ordering approach, Wu et al. (2011) automatically learned pre-ordering rules from PAS. In the soft constraint or reordering model approach, Liu and Gildea (2010) modeled the reordering/deletion of source-side semantic roles in a tree-to-string translation model. Xiong et al. (2012) and Li et al. (2013) predicted the translation order between either two arguments or an argument and its predicate. Instead of decomposing a PAS into individual units, Zhai et al. (2013) constructed a classifier for each source side PAS. 
Finally in the post-processing approach category, Wu and Fung (2009) performed semantic role labeling on translation output and reordered arguments to maximize the cross-lingual match of the semantic frames between the source sentence and the target translation. To our knowledge, their semantic reordering models were PAS-specific. In contrast, our model is universal and can be easily adopted to model the reordering of other linguistic units (e.g., syntactic constituents). Moreover, we have studied the effectiveness of the semantic reordering model in different scenarios. Non-syntax-based reorderings in HPB: Recently we have also seen work on lexicalized reordering models without syntactic information in HPB (Setiawan et al., 2009; Huck et al., 2013; Nguyen and Vogel, 2013). The non-syntaxbased reordering approach models the reordering of translation words/phrases while the syntaxbased approach models the reordering of syntactic constituents. Although there are overlaps between translation phrases and syntactic constituents, it is reasonable to think that the two re4Note that they obtained the translation order of source word pairs by predicting the reordering of adjacent constituents, which was quite close to our work. ordering approaches can work together well and even complement each other, as the linguistic patterns they capture differ substantially. Setiawan et al. (2013) modeled the orientation decisions between anchors and two neighboring multi-unit chunks which might cross phrase or rule boundaries. Last, we also note that recent work on nonsyntax-based reorderings in (flat) phrase-based models (Cherry, 2013; Feng et al., 2013) can also be potentially adopted to hpb models. 7 Conclusion and Future Work In this paper, we have presented a unified reordering framework to incorporate soft linguistic constraints (of syntactic or semantic nature) into the HPB translation model. The syntactic reordering models take CFG rules and model their reordering on the target side, while the semantic reordering models work with PAS. Experiments on ChineseEnglish translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system. We have also discussed the differences between the two linguistic reordering models. There are many directions in which this work can be continued. First, the syntactic reordering model can be extended to model reordering among constituents that cross CFG rules. Second, although we do not see obvious gain from the semantic reordering model when the syntactic model is adopted, it might be beneficial to further jointly consider the two reordering models, focusing on where each one does well. Third, to better examine the overlap or synergy between our approach and the non-syntax-based reordering approach, we will conduct direct comparisons and combinations with the latter. Acknowledgments This research was supported in part by the BOLT program of the Defense Advanced Research Projects Agency, Contract No. HR001212-C-0015. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the view of DARPA. The authors would like to thank three anonymous reviewers for providing helpful comments, and also acknowledge Ke Wu, Vladimir Eidelman, Hua He, Doug Oard, Yuening Hu, Jordan Boyd-Graber, and Jyothi Vinjumur for useful discussions. 1131 References Stanley F. Chen and Joshua Goodman. 1996. 
An empirical study of smoothing techniques for language modeling. In Proceedings of ACL 1996, pages 310– 318. Colin Cherry. 2013. Improved reordering for phrasebased translation using sparse features. In Proceedings of HLT-NAACL 2013, pages 22–31. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In Proceedings of EMNLP 2008, pages 224–233. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. David Chiang. 2010. Learning to translate with source and target syntax. In Proceedings of ACL 2010, pages 1443–1452. Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL 2005, pages 531–540. Chris Dyer, Adam Lopez, Juri Ganitkevitch, Jonathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of ACL 2010 System Demonstrations, pages 7–12. Vladimir Eidelman, Ke Wu, Ferhan Ture, Philip Resnik, and Jimmy Lin. 2013. Mr. mira: Opensource large-margin structured learning on mapreduce. In Proceedings of ACL 2013 System Demonstrations, pages 199–204. Minwei Feng, Jan-Thorsten Peter, and Hermann Ney. 2013. Advancements in reordering models for statistical machine translation. In Proceedings of ACL 2013, pages 322–332. Yang Gao, Philipp Koehn, and Alexandra Birch. 2011. Soft dependency constraints for reordering in hierarchical phrase-based translation. In Proceedings of EMNLP 2011, pages 857–868. Niyu Ge. 2010. A direct syntax-driven reordering model for phrase-based machine translation. In Proceedings of HLT-NAACL 2010, pages 849–857. Dmitriy Genzel. 2010. Automatically learning sourceside reordering rules for large scale machine translation. In Proceedings of COLING 2010, pages 376– 384. Zhongqiang Huang, Jacob Devlin, and Rabih Zbib. 2013. Factored soft source syntactic constraints for hierarchical machine translation. In Proceedings of EMNLP 2013, pages 556–566. Matthias Huck, Joern Wuebker, Felix Rietig, and Hermann Ney. 2013. A phrase orientation model for hierarchical machine translation. In Proceedings of WMT 2013, pages 452–463. Maxim Khalilov and Khalil Sima’an. 2011. Contextsensitive syntactic source-reordering by statistical transduction. In Proceedings of IJCNLP 2011, pages 38–46. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL 2003, pages 48–54. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP 2004, pages 388–395. Young-Suk Lee, Bing Zhao, and Xiaoqian Luo. 2010. Constituent reordering and syntax models for English-to-Japanese statistical machine translation. In Proceedings of COLING 2010, pages 626–634. Uri Lerner and Slav Petrov. 2013. Source-side classifier preordering for machine translation. In Proceedings of EMNLP 2013, pages 513–523. Chi-Ho Li, Minghui Li, Dongdong Zhang, Mu Li, Ming Zhou, and Yi Guan. 2007. A probabilistic approach to syntax-based reordering for statistical machine translation. In Proceedings of ACL 2007, pages 720–727. Junhui Li, Guodong Zhou, and Hwee Tou Ng. 2010. Joint syntactic and semantic parsing of Chinese. In Proceedings of ACL 2010, pages 1108–1117. Junhui Li, Philip Resnik, and Hal Daum´e III. 2013. 
Modeling syntactic and semantic structures in hierarchical phrase-based translation. In Proceedings of HLT-NAACL 2013, pages 540–549. Ding Liu and Daniel Gildea. 2010. Semantic role features for machine translation. In Proceedings of COLING 2010, pages 716–724. Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. In Proceedings of ACL-HLT 2008, pages 1003–1011. Markos Mylonakis and Khalil Sima’an. 2011. Learning hierarchical translation structure with linguistic annotations. In Proceedings of ACL 2011, pages 642–652. ThuyLinh Nguyen and Stephan Vogel. 2013. Integrating phrase-based reordering features into a chartbased decoder for machine translation. In Proceedings of ACL 2013, pages 1587–1596. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of ACL 2000, pages 440–447. 1132 Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311–318. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLTNAACL 2007, pages 404–411. Hendra Setiawan, Min Yen Kan, Haizhou Li, and Philip Resnik. 2009. Topological ordering of function words in hierarchical phrase-based translation. In Proceedings of ACL-IJCNLP 2009, pages 324–332. Hendra Setiawan, Bowen Zhou, Bing Xiang, and Libin Shen. 2013. Two-neighbor orientation model with cross-boundary global contexts. In Proceedings of ACL 2013, pages 1264–1274. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 168–171. Yoshimasa Tsuruoka, Jun’ichi Tsujii, and Sophia Ananiadou. 2009. Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty. In Proceedings of ACL-IJCNLP 2009, pages 477–485. Karthik Visweswariah, Jiri Navratil, Jeffrey Sorensen, Vijil Chenthamarakshan, and Nandakishore Kambhatla. 2010. Syntax based reordering with automatically derived rules for improved statistical machine translation. In Proceedings of COLING 2010, pages 1119–1127. Chao Wang, Michael Collins, and Philipp Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proceedings of EMNLP 2007, pages 737–745. Dekai Wu and Pascale Fung. 2009. Semantic roles for smt: A hybrid two-pass model. In Proceedings of HLT-NAACL 2009: short papers, pages 13–16. Xianchao Wu, Katsuhito Sudoh, Kevin Duh, Hajime Tsukada, and Masaaki Nagata. 2011. Extracting pre-ordering rules from predicate-argument structures. In Proceedings of IJCNLP 2011, pages 29– 37. Fei Xia and Michael McCord. 2004. Improving a statistical mt system with automatically learned rewrite patterns. In Proceedings of COLING 2004, pages 508–514. Deyi Xiong, Min Zhang, and Haizhou Li. 2012. Modeling the translation of predicate-argument structure for smt. In Proceedings of ACL 2012, pages 902– 911. Peng Xu, Jaeho Kang, Michael Ringgaard, and Franz Och. 2009. Using a dependency parser to improve smt for subject-object-verb languages. In Proceedings of HLT-NAACL 2009, pages 245–253. Nianwen Xue and Martha Palmer. 2009. Adding semantic roles to the Chinese Treebank. 
Natural Language Engineering, 15(1):143–172. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238. Nan Yang, Mu Li, Dongdong Zhang, and Nenghai Yu. 2012. A ranking-based approach to word reordering for statistical machine translation. In Proceedings of ACL 2012, pages 912–920. Feifei Zhai, Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2013. Handling ambiguities of bilingual predicate-argument structures for statistical machine translation. In Proceedings of ACL 2013, pages 1127–1136. 1133
2014
106
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1134–1144, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Are Two Heads Better than One? Crowdsourced Translation via a Two-Step Collaboration of Non-Professional Translators and Editors Rui Yan, Mingkun Gao, Ellie Pavlick, and Chris Callison-Burch Computer and Information Science Department, University of Pennsylvania, Philadelphia, PA 19104, U.S.A. {ruiyan,gmingkun,epavlick}@seas.upenn.edu, [email protected] Abstract Crowdsourcing is a viable mechanism for creating training data for machine translation. It provides a low cost, fast turnaround way of processing large volumes of data. However, when compared to professional translation, naive collection of translations from non-professionals yields low-quality results. Careful quality control is necessary for crowdsourcing to work well. In this paper, we examine the challenges of a two-step collaboration process with translation and post-editing by non-professionals. We develop graphbased ranking models that automatically select the best output from multiple redundant versions of translations and edits, and improves translation quality closer to professionals. 1 Introduction Statistical machine translation (SMT) systems are trained using bilingual sentence-aligned parallel corpora. Theoretically, SMT can be applied to any language pair, but in practice it produces the state-of-art results only for language pairs with ample training data, like English-Arabic, EnglishChinese, French-English, etc. SMT gets stuck in a severe bottleneck for many minority or ‘low resource’ languages with insufficient data. This drastically limits which languages SMT can be successfully applied to. Because of this, collecting parallel corpora for minor languages has become an interesting research challenge. There are various options for creating training data for new language pairs. Past approaches have examined harvesting translated documents from the web (Resnik and Smith, 2003; Uszkoreit et al., 2010; Smith et al., 2013), or discovering parallel fragments from comparable corpora (Munteanu and Marcu, 2005; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010). Until relatively recently, little consideration has been given to creating parallel data from scratch. This is because the cost of hiring professional translators is prohibitively high. For instance, Germann (2001) hoped to hire professional translators to create a modest sized 100,000 word Tamil-English parallel corpus, but were stymied by the costs and the difficulty of finding good translators for a short-term commitment. Recently, crowdsourcing has opened the possibility of translating large amounts of text at low cost using non-professional translators. Facebook localized its web site into different languages using volunteers (TechCrunch, 2008). DuoLingo turns translation into an educational game, and translates web content using its language learners (von Ahn, 2013). Rather than relying on volunteers or gamification, NLP research into crowdsourcing translation has focused on hiring workers on the Amazon Mechanical Turk (MTurk) platform (CallisonBurch, 2009). This setup presents unique challenges, since it typically involves non-professional translators whose language skills are varied, and since it sometimes involves participants who try to cheat to get the small financial reward (Zaidan and Callison-Burch, 2011). 
A natural approach for trying to shore up the skills of weak bilinguals is to pair them with a native speaker of the target language to edit their translations. We review relevant research from NLP and human-computer interaction (HCI) on collaborative translation processes in Section 2. To sort good translations from bad, researchers often solicit multiple, redundant translations and then build models to try to predict which translations are the best, or which translators tend to produce the highest quality translations. The contributions of this paper are: 1134 • An analysis of the difficulties posed by a twostep collaboration between editors and translators in Mechanical Turk-style crowdsourcing environments. Editors vary in quality, and poor editing can be difficult to detect. • A new graph-based algorithm for selecting the best translation among multiple translations of the same input. This method takes into account the collaborative relationship between the translators and the editors. 2 Related work In the HCI community, several researchers have proposed protocols for collaborative translation efforts (Morita and Ishida, 2009b; Morita and Ishida, 2009a; Hu, 2009; Hu et al., 2010). These have focused on an iterative collaboration between monolingual speakers of the two languages, facilitated with a machine translation system. These studies are similar to ours in that they rely on native speakers’ understanding of the target language to correct the disfluencies in poor translations. In our setup the poor translations are produced by bilingual individuals who are weak in the target language, and in their experiments the translations are the output of a machine translation system.1 Another significant difference is that the HCI studies assume cooperative participants. For instance, Hu et al. (2010) recruited volunteers from the International Children’s Digital Library (Hourcade et al., 2003) who were all well intentioned and participated out a sense of altruism and to build a good reputation among the other volunteer translators at childrenslibrary.org. Our setup uses anonymous crowd workers hired on Mechanical Turk, whose motivation to participate is financial. Bernstein et al. (2010) characterized the problems with hiring editors via MTurk for a word processing application. Workers were either lazy (meaning they made only minimal edits) or overly zealous (meaning they made many unnecessary edits). Bernstein et al. (2010) addressed this problem with a three step find-fix-verify process. In the first step, workers click on one word or phrase that needed to be corrected. In the next step, a separate group of workers proposed correc1A variety of HCI and NLP studies have confirmed the efficacy of monolingual or bilingual individuals post-editing of machine translation output (Callison-Burch, 2005; Koehn, 2010; Green et al., 2013). Past NLP work has also examined automatic post-editing(Knight and Chander, 1994). tions to problematic regions that had been identified by multiple workers in the first pass. In the final step, other workers would validate whether the proposed corrections were good. Most NLP research into crowdsourcing has focused on Mechanical Turk, following pioneering work by Snow et al. (2008) who showed that the platform was a viable way of collecting data for a wide variety of NLP tasks at low cost and in large volumes. 
They further showed that non-expert annotations are similar to expert annotations when many non-expert labelings for the same input are aggregated, through simple voting or through weighting votes based on how closely non-experts matched experts on a small amount of calibration data. MTurk has subsequently been widely adopted by the NLP community and used for an extensive range of speech and language applications (Callison-Burch and Dredze, 2010). Although hiring professional translators to create bilingual training data for machine translation systems has been deemed infeasible, Mechanical Turk has provided a low cost way of creating large volumes of translations (Callison-Burch, 2009; Ambati and Vogel, 2010). For instance, Zbib et al. (2012; Zbib et al. (2013) translated 1.5 million words of Levine Arabic and Egyptian Arabic, and showed that a statistical translation system trained on the dialect data outperformed a system trained on 100 times more MSA data. Post et al. (2012) used MTurk to create parallel corpora for six Indian languages for less than $0.01 per word. MTurk workers translated more than half a million words worth of Malayalam in less than a week. Several researchers have examined the use of active learning to further reduce the cost of translation (Ambati et al., 2010; Ambati, 2012; Bloodgood and Callison-Burch, 2010). Crowdsourcing allowed real studies to be conducted whereas most past active learning were simulated. Pavlick et al. (2014) conducted a large-scale demographic study of the languages spoken by workers on MTurk by translating 10,000 words in each of 100 languages. Chen and Dolan (2012) examined the steps necessary to build a persistent multilingual workforce on MTurk. This paper is most closely related to previous work by Zaidan and Callison-Burch (2011), who showed that non-professional translators could approach the level of professional translators. They solicited multiple redundant translations from dif1135 Urdu translator: According to the territory’s people the pamphlets from the Taaliban had been read in the announcements in all the mosques of the Northern Wazeerastan. English post-editor: According to locals, the pamphlet released by the Taliban was read out on the loudspeakers of all the mosques in North Waziristan. LDC professional: According to the local people, the Taliban’s pamphlet was read over the loudspeakers of all mosques in North Waziristan. Table 1: Different versions of translations. ferent Turkers for a collection of Urdu sentences that had been previously professionally translated by the Linguistics Data Consortium. They built a model to try to predict on a sentence-by-sentence and Turker-by-Turker which was the best translation or translator. They also hired US-based Turkers to edit the translations, since the translators were largely based in Pakistan and exhibited errors that are characteristic of speakers of English as a language. Zaidan and Callison-Burch (2011) observed only modest improvements when incorporating these edited translation into their model. We attempt to analyze why this is, and we proposed a new model to try to better leverage their data. 3 Crowdsourcing Translation Setup We conduct our experiments using the data collected by Zaidan and Callison-Burch (2011). This data set consists 1,792 Urdu sentences from a variety of news and online sources, each paired with English translations provided by non-professional translators on Mechanical Turk. 
Each Urdu sentence was translated redundantly by 3 distinct translators, and each translation was edited by 3 separate (native English-speaking) editors to correct for grammatical and stylistic errors. In total, this gives us 12 non-professional English candidate sentences (3 unedited, 9 edited) per original Urdu sentence. 52 different Turkers took part in the translation task, each translating 138 sentences on average. In the editing task, 320 Turkers participated, averaging 56 sentences each. For comparison, the data also includes 4 different reference translations for each source sentence, produced by professional translators. Table 1 gives an example of an unedited translation, an edited translation, and a professional translation for the same sentence.
The translations provided by translators on MTurk are generally done conscientiously, preserving the meaning of the source sentence, but typically contain simple mistakes like misspellings, typos, and awkward word choice. English-speaking editors, despite having no knowledge of the source language, are able to fix these errors. In this work, we show that the collaboration design of two heads – non-professional Urdu translators and non-professional English editors – yields better translated output than would either one working in isolation, and can better approximate the quality of professional translators.

Analysis: We know from inspection that translations seem to improve with editing (Table 1). Given the data from MTurk, we explore whether this is the case in general: Do all translations improve with editing? To what extent do the individual translator and the individual editor affect the quality of the final sentence?

Figure 1: Relationship between editor aggressiveness and effectiveness. Each point represents an editor/translation pair. Aggressiveness (x-axis) is measured as the TER between the pre-edit and post-edit version of the translation, and effectiveness (y-axis) is measured as the average amount by which the editing reduces the translation's TERgold. While many editors make only a few changes, those who make many changes can bring the translation substantially closer to professional quality.

We use translation edit rate (TER) as a measure of translation similarity. TER represents the amount of change necessary to transform one sentence into another, so a low TER means the two sentences are very similar.

Figure 2: Effect of editing on translations of varying quality. Rows reflect bins of editors, with the worst editors (those whose changes result in increased TERgold) on the top and the most effective editors (those whose changes result in the largest reduction in TERgold) on the bottom. Bars reflect bins of translations, with the highest TERgold translations on the left, and the lowest on the right. We can see from the consistently negative ∆TERgold in the bottom row that good editors are able to improve both good and bad translations.

To capture the quality ("professionalness") of a translation, we take the average TER of the translation against each of our gold translations.
That is, we define TERgold of translation t as

TER_{gold}(t) = \frac{1}{4} \sum_{i=1}^{4} TER(gold_i, t)    (1)

where a lower TERgold is indicative of a higher quality (more professional-sounding) translation.
We first look at editors along two dimensions: their aggressiveness and their effectiveness. Some editors may be very aggressive (they make many changes to the original translation) but still be ineffective (they fail to bring the quality of the translation closer to that of a professional). We measure aggressiveness by looking at the TER between the pre- and post-edited versions of each editor's translations; higher TER implies more aggressive editing. To measure effectiveness, we look at the change in TERgold that results from the editing; negative ∆TERgold means the editor effectively improved the quality of the translation, while positive ∆TERgold means the editing actually brought the translation further from our gold standard.
Figure 1 shows the relationship between these two qualities for individual editor/translation pairs. We see that while most translations require only a few edits, there are a large number of translations which improve substantially after heavy editing. This trend conforms to our intuition that editing is most useful when the translation has much room for improvement, and opens the question of whether good editors can offer improvements to translations of all qualities.
To address this question, we split our translations into 5 bins, based on their TERgold. We also split our editors into 5 bins, based on their effectiveness (i.e. the average amount by which their editing reduces TERgold). Figure 2 shows the degree to which editors at each level are able to improve the translations from each bin. We see that good editors are able to make improvements to translations of all qualities, but that good editing has the greatest impact on lower quality translations. This result suggests that finding good editor/translator pairs, rather than good editors and good translators in isolation, should produce the best translations overall. Figure 3 gives an example of how an initially medium-quality translation, when combined with good editing, produces a better result than the higher-quality translation paired with mediocre editing.

Figure 3: Three alternative translations (left) and the edited versions of each (right). Each edit on the right was produced by a different editor. Order reflects the TERgold of each translation, with the lowest TERgold on the top. Some translators receive low TERgold scores due to superficial errors, which can be easily improved through editing. In the above example, the middle-ranked translation (green) becomes the best translation after being revised by a good editor.

4 Problem Formulation
The problem definition of the crowdsourcing translation task is straightforward: given a set of candidate translations for a source sentence, we want to choose the best output translation. This output translation is the result of the combined translation and editing stages. Therefore, our method operates over a heterogeneous network that includes translators and post-editors as well as the translated sentences that they produce.
We frame the problem as follows. We form two graphs: the first graph (GT) represents Turkers (translator/editor pairs) as nodes; the second graph (GC) represents candidate translated and post-edited sentences (henceforth "candidates") as nodes. These two graphs, GT and GC are combined as subgraphs of a third graph (GTC).
Edges in GTC connect author pairs (nodes in GT ) to the candidate that they produced (nodes in GC). Together, GT , GC, and GTC define a co-ranking problem (Yan et al., 2012a; Yan et al., 2011b; Yan et al., 2012b) with linkage establishment (Yan et al., 2011a; Yan et al., 2012c), which we define formally as follows. Let G denote the heterogeneous graph with nodes V and edges E. Let G = (V ,E) = (VT , VC, ET , EC, ETC). G is divided into three subgraphs, GT , GC, and GTC. GC = (VC, EC) is a weighted undirected graph representing the candidates and their lexical relationships to one another. Let VC denote a collection of translated and edited candidates, and EC the lexical similarity between the candidates (see Section 4.3 for details). GT = (VT , ET ) is a weighted undirected graph representing collaborations between Turkers. VT is the set of translator/editor pairs. Edges ET connect translator/editor pairs in VT which share a translator and/or editor. Each collaboration (i.e. each node in VT ) produces a candidate (i.e. a node in VC). GTC = (VTC, ETC) is an unweighted bipartite graph that ties GT and GC together and represents “authorship”. The graph G consists of nodes VTC = VT ∪VC and edges ETC connecting each candidate with its authoring translator/post-editor pair. The three sub-networks (GT , GC, and GTC) are illustrated in Figure 4. 4.1 Inter-Graph Ranking The framework includes three random walks, one on GT , one on GC and one on GTC. A random walk on a graph is a Markov chain, its states being the vertices of the graph. It can be described by a stochastic square matrix, where the dimension is the number of vertices in the graph, and the entries describe the transition probabilities from one vertex to the next. The mutual reinforcement framework couples the two random walks on GT and GC that rank candidates and Turkers in isolation. The ranking method allows us to obtain a global ranking by taking into account the intra/inter-component dependencies. In the following sections, we describe how we obtain the rankings on GT and GC, and then move on to discuss how the two are coupled. Our algorithm aims to capture the following intuitions. A candidate is important if 1) it is similar to many of the other proposed candidates and 2) it is authored by better qualified translators and/or post-editors. Analogously, a translator/editor pair is believed to be better qualified if 1) the editor is collaborating with a good translator and vice versa and 2) the pair has authored important candidates. This ranking schema is actually a reinforced process across the heterogeneous graphs. We use two vectors c = [π(c)]|c|×1 and t = [π(t)]|t|×1 to denote the saliency scores π(.) of candidates and Turker pairs. The above-mentioned intuitions can be formulated as follows: • Homogeneity. We use adjacency matrix 1138 Figure 4: 2-step collaborative crowdsourcing translation model based on graph ranking framework including three sub-networks. The undirected links between users denotes translation-editing collaboration. The undirected links between candidate translations indicate lexical similarity between candidates. A bipartite graph ties candidate and Turker networks together by authorship (to make the figure clearer, some linkage is omitted). A dashed circle indicates the group of candidate translations for a single source sentence to translate. [M]|c|×|c| to describe the homogeneous affinity between candidates and [N]|t|×|t| to describe the affinity between Turkers. 
\mathbf{c} \propto M^T \mathbf{c}, \quad \mathbf{t} \propto N^T \mathbf{t}    (2)

where c = |VC| is the number of vertices in the candidate graph and t = |VT| is the number of vertices in the Turker graph. The adjacency matrix [M] denotes the transition probabilities between candidates, and analogously matrix [N] denotes the affinity between Turker collaboration pairs.
• Heterogeneity. We use adjacency matrices [\hat{W}]_{|c| \times |t|} and [\bar{W}]_{|t| \times |c|} to describe the authorship between the output candidate and the producer Turker pair from both the candidate-to-Turker and Turker-to-candidate perspectives.

\mathbf{c} \propto \hat{W}^T \mathbf{t}, \quad \mathbf{t} \propto \bar{W}^T \mathbf{c}    (3)

All affinity matrices will be defined in the next section. By fusing the above equations, we obtain the following iterative calculation in matrix form. For numerical computation of the saliency scores, the initial scores of all sentences and Turkers are set to 1 and the following two steps are alternated until convergence to select the best candidate.
Step 1: compute the saliency scores of candidates, and then normalize using the ℓ1 norm.

\mathbf{c}^{(n)} = (1 - \lambda) M^T \mathbf{c}^{(n-1)} + \lambda \hat{W} \mathbf{t}^{(n-1)}, \quad \mathbf{c}^{(n)} = \mathbf{c}^{(n)} / \|\mathbf{c}^{(n)}\|_1    (4)

Step 2: compute the saliency scores of Turker pairs, and then normalize using the ℓ1 norm.

\mathbf{t}^{(n)} = (1 - \lambda) N^T \mathbf{t}^{(n-1)} + \lambda \bar{W} \mathbf{c}^{(n-1)}, \quad \mathbf{t}^{(n)} = \mathbf{t}^{(n)} / \|\mathbf{t}^{(n)}\|_1    (5)

where λ specifies the relative contributions to the saliency score trade-off between the homogeneous affinity and the heterogeneous affinity. In order to guarantee the convergence of the iterative form, we must force the transition matrix to be stochastic and irreducible. To this end, we must make c and t column stochastic (Langville and Meyer, 2004). c and t are therefore normalized after each iteration of Equations (4) and (5).

4.2 Intra-Graph Ranking
The standard PageRank algorithm starts from an arbitrary node and randomly selects to either follow a random out-going edge (considering the weighted transition matrix) or to jump to a random node (treating all nodes with equal probability). In a simple random walk, it is assumed that all nodes in the transition matrix are equi-probable before the walk starts. Then c and t are calculated as:

\mathbf{c} = \mu M^T \mathbf{c} + (1 - \mu) \frac{\mathbf{1}}{|V_C|}    (6)

and

\mathbf{t} = \mu N^T \mathbf{t} + (1 - \mu) \frac{\mathbf{1}}{|V_T|}    (7)

where 1 is a vector with all elements equal to 1 and whose size corresponds to the size of VC or VT. µ is the damping factor, usually set to 0.85, as in the PageRank algorithm.

4.3 Affinity Matrix Establishment
We introduce the affinity matrix calculation, including the homogeneous affinity (i.e., M, N) and the heterogeneous affinity (i.e., \hat{W}, \bar{W}). As discussed, we model the collection of candidates as a weighted undirected graph, GC, in which nodes represent candidate sentences and edges represent lexical relatedness. We define an edge's weight to be the cosine similarity between the candidates represented by the nodes that it connects. The adjacency matrix M describes such a graph, with each entry corresponding to the weight of an edge.

F(c_i, c_j) = \frac{c_i \cdot c_j}{\|c_i\| \|c_j\|}, \quad M_{ij} = \frac{F(c_i, c_j)}{\sum_k F(c_i, c_k)}    (8)

where F(.) is the cosine similarity and c is a term vector corresponding to a candidate. We treat a candidate as a short document and weight each term with tf.idf (Manning et al., 2008), where tf is the term frequency and idf is the inverse document frequency.
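A minimal sketch of Equation (8) is given below. It is not the authors' implementation: it assumes the candidates are available as plain strings and uses scikit-learn's TfidfVectorizer and cosine_similarity purely for convenience, since the paper only specifies tf.idf term weights and cosine similarity.

```python
# Build the row-normalized candidate affinity matrix M of Equation (8).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def candidate_affinity(candidates):
    """M_ij = F(c_i, c_j) / sum_k F(c_i, c_k), with F the tf.idf cosine similarity."""
    tfidf = TfidfVectorizer().fit_transform(candidates)   # one tf.idf vector per candidate
    F = cosine_similarity(tfidf)                          # F(c_i, c_j), shape (|V_C|, |V_C|)
    M = F / F.sum(axis=1, keepdims=True)                  # row normalization
    return M

# Invented example candidates for one source sentence.
candidates = [
    "the pamphlet was read out in all the mosques",
    "the pamphlet from the taliban was read in all mosques",
    "taliban pamphlets were announced at every mosque",
]
M = candidate_affinity(candidates)
print(M.shape, M.sum(axis=1))   # each row of M sums to 1
```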
The Turker graph, GT, is an undirected graph whose edges represent "collaboration." Formally, let ti and tj be two translator/editor pairs; we say that pair ti "collaborates with" pair tj (and therefore, there is an edge between ti and tj) if ti and tj share either a translator or an editor (or share both a translator and an editor). Let the function I(ti, tj) denote the number of "collaborations" (#col) between ti and tj:

I(t_i, t_j) = \begin{cases} \#col & (e_{ij} \in E_T) \\ 0 & \text{otherwise} \end{cases}    (9)

The adjacency matrix N is then defined as

N_{ij} = \frac{I(t_i, t_j)}{\sum_k I(t_i, t_k)}    (10)

In the bipartite candidate-Turker graph GTC, the entry E_{TC}(i, j) is an indicator function denoting whether the candidate ci is generated by tj:

A(c_i, t_j) = \begin{cases} 1 & (e_{ij} \in E_{TC}) \\ 0 & \text{otherwise} \end{cases}    (11)

Through ETC we define the weight matrices \bar{W}_{ij} and \hat{W}_{ij}, containing the conditional probabilities of transitions from ci to tj and vice versa:

\bar{W}_{ij} = \frac{A(c_i, t_j)}{\sum_k A(c_i, t_k)}, \quad \hat{W}_{ij} = \frac{A(c_i, t_j)}{\sum_k A(c_k, t_j)}    (12)

5 Evaluation
We are interested in testing our random walk method, which incorporates information from both the candidate translations and from the Turkers. We want to test two versions of our proposed collaborative co-ranking method: 1) based on the unedited translations only and 2) based on the edited sentences after translator/editor collaborations.
Metric: Since we have four professional translation sets, we can calculate the Bilingual Evaluation Understudy (BLEU) score (Papineni et al., 2002) for one professional translator (P1) using the other three (P2,3,4) as a reference set. We repeat the process four times, scoring each professional translator against the others, to calculate the expected range of professional quality translation. In the following sections, we evaluate each of our methods by calculating BLEU scores against the same four sets of three reference translations. Therefore, each number reported in our experimental results is an average of four numbers, corresponding to the four possible ways of choosing 3 of the 4 reference sets. This allows us to compare the BLEU score achieved by our methods against the BLEU scores achievable by professional translators.
Baselines: As a naive baseline, we choose one candidate translation at random for each input Urdu sentence. To establish an upper bound for our methods, and to determine if there exist high-quality Turker translations at all, we compute four oracle scores. The first oracle operates at the segment level on the sentences produced by translators only: for each source segment, we choose from the translations the one that scores highest (in terms of BLEU) against the reference sentences. The second oracle is applied similarly, but chooses from the candidates produced by the collaboration of translator/post-editor pairs. The third oracle operates at the worker level: for each source segment, we choose from the translations the one provided by the worker whose translations (over all sentences) score the highest on average. The fourth oracle also operates at the worker level, but selects from sentences produced by translator/post-editor collaborations.

Reference (Avg.)            42.51
Oracle (Seg-Trans)          44.93
Oracle (Seg-Trans+Edit)     48.44
Oracle (Turker-Trans)       38.66
Oracle (Turker-Trans+Edit)  39.16
Random                      30.52
Lowest TER                  35.78
Graph Ranking (Trans)       38.88
Graph Ranking (Trans+Edit)  41.43
Table 2: Overall BLEU performance for all methods (with and without post-editing). The highlighted result indicates the best performance, which is based on both candidate sentences and Turker information.
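To make the segment-level oracle concrete, here is a toy sketch (ours, not the authors' code); it assumes tokenized candidates and references, and borrows NLTK's smoothed sentence-level BLEU as a stand-in for the paper's scoring.

```python
# Segment-level oracle: pick the candidate with the highest sentence BLEU against the references.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def segment_oracle(candidates, references):
    """Return the candidate whose sentence BLEU against the references is highest."""
    smooth = SmoothingFunction().method1
    scored = [(sentence_bleu(references, cand, smoothing_function=smooth), cand)
              for cand in candidates]
    return max(scored, key=lambda x: x[0])[1]

# Invented, tokenized example data.
references = [["the", "taliban", "pamphlet", "was", "read", "in", "all", "mosques"]]
candidates = [
    ["pamphlet", "of", "taliban", "read", "in", "mosques"],
    ["the", "taliban", "pamphlet", "was", "read", "out", "in", "all", "mosques"],
]
print(" ".join(segment_oracle(candidates, references)))
```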
These oracle methods represent ideal solutions under our scenario. We also examine two voting-inspired methods. The first method selects the translation with the minimum average TER (Snover et al., 2006) against the other translations; intuitively, this would represent the "consensus" translation. The second method selects the translation generated by the Turker who, on average, provides translations with the minimum average TER.
Results: A summary of our results is given in Table 2. As expected, random selection yields bad performance, with a BLEU score of 30.52. The oracles indicate that there is usually an acceptable translation from the Turkers for any given sentence. Since the oracles select from a small group of only 4 translations per source segment, they are not overly optimistic, and rather reflect the true potential of the collected translations. On average, the reference translations give a score of 42.38. To put this in perspective, the output of a state-of-the-art machine translation system (the syntax-based variant of Joshua) achieves a score of 26.91, as reported in (Zaidan and Callison-Burch, 2011). The approach which selects the translations with the minimum average TER (Snover et al., 2006) against the other three translations (the "consensus" translation) achieves a BLEU score of 35.78. Using the raw translations without post-editing, our graph-based ranking method achieves a BLEU score of 38.89, compared to Zaidan and Callison-Burch (2011)'s reported score of 28.13, which they achieved using a linear feature-based classification. Their linear classifier achieved a reported score of 39.06 when combining information from both translators and editors.2 In contrast, our proposed graph-based ranking framework achieves a score of 41.43 when using the same information. This boost in BLEU score confirms our intuition that the hidden collaboration networks between candidate translations and translator/editor pairs are indeed useful.
2 Note that the data we used in our experiments are slightly different, as we discard nearly 100 NULL sentences in the raw data. We do not re-implement this baseline but report the results from the paper directly. According to our experiments, most of the results generated by baselines and oracles are very close to the previously reported values.
Parameter Tuning: There are two parameters in our experimental setups: µ controls the probability of starting a new random walk and λ controls the coupling between the candidate and Turker subgraphs. We set the damping factor µ to 0.85, following the standard PageRank paradigm. In order to determine a value for λ, we used the average BLEU, computed against the professional reference translations, as a tuning metric. We experimented with values of λ ranging from 0 to 1, with a step size of 0.05 (Figure 5). Small λ values place little emphasis on the candidate/Turker coupling, whereas larger values rely more heavily on the co-ranking. Overall, we observed better performance with values within the range of 0.05-0.15. This suggests that both sources of information – the candidate itself and its authors – are important for the crowdsourcing translation task. In all of our reported results, we used λ = 0.1.

Figure 5: Effect of candidate-Turker coupling (λ) on BLEU score.

Plain ranking        38.89
w/o collaboration    38.88
Shared translator    41.38
Shared post-editor   41.29
Shared Turker        41.43
Table 3: Variations of all component settings.
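For readers who want to trace the coupled updates of Section 4.1 end to end, the sketch below (ours, not the authors' code) iterates Equations (4) and (5) with the reported λ = 0.1; M, N, Ŵ and W̄ are assumed to have been built as in Section 4.3.

```python
# Coupled co-ranking of candidates (c) and translator/editor pairs (t), Eqs. (4)-(5).
import numpy as np

def co_rank(M, N, W_hat, W_bar, lam=0.1, iters=100, tol=1e-8):
    """M: |c|x|c|, N: |t|x|t|, W_hat: |c|x|t|, W_bar: |t|x|c| affinity matrices."""
    n_c, n_t = M.shape[0], N.shape[0]
    c = np.ones(n_c) / n_c
    t = np.ones(n_t) / n_t
    for _ in range(iters):
        c_new = (1 - lam) * M.T @ c + lam * W_hat @ t     # Eq. (4)
        t_new = (1 - lam) * N.T @ t + lam * W_bar @ c     # Eq. (5)
        c_new /= np.abs(c_new).sum()                      # l1 normalization
        t_new /= np.abs(t_new).sum()
        converged = np.abs(c_new - c).sum() + np.abs(t_new - t).sum() < tol
        c, t = c_new, t_new
        if converged:
            break
    return c, t   # the candidate with the largest score in c is the one selected
```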
Analysis We examine the relative contribution of each component of our approach on the overall performance. We first examine the centroid-based ranking on the candidate sub-graph (GC) alone to see the effect of voting among translated sentences; we denote this strategy as plain ranking. Then we incorporate the standard random walk on the Turker graph (GT ) to include the structural information but without yet including any collaboration information; that is, we incorporate information from GT and GC without including edges linking the two together. The co-ranking paradigm is exactly the same as the framework described in Section 3.2, but with simplified structures. Finally, we examine the two-step collaboration based candidate-Turker graph using several variations on edge establishment. As before, the nodes are the translator/post-editor working pairs. We investigate three settings in which 1) edges connect two nodes when they share only a translator, 2) edges connect two nodes when they share only a post-editor, and 3) edges connect two nodes when they share either a translator or a post-editor. These results are summarized in Table 3. Interestingly, we observe that when modeling the linkage between the collaboration pairs, connecting Turker pairs which share either a translator or the post-editor achieves better performance than connecting pairs that share only translators or connecting pairs which share only editors. This result supports the intuition that a denser collaboration matrix will help propagate saliency to good translators/post-editors and hence provides better predictions for candidate quality. 6 Conclusion We have proposed an algorithm for using a twostep collaboration between non-professional translators and post-editors to obtain professionalquality translations. Our method, based on a co-ranking model, selects the best crowdsourced translation from a set of candidates, and is capable of selecting translations which near professional quality. Crowdsourcing can play a pivotal role in future efforts to create parallel translation datasets. In addition to its benefits of cost and scalability, crowdsourcing provides access to languages that currently fall outside the scope of statistical machine translation research. In future work on crowdsourced translation, further benefits in quality improvement and cost reduction could stem from 1) building ground truth data sets based on high-quality Turkers’ translations and 2) identifying when sufficient data has been collected for a given input, to avoid soliciting unnecessary redundant translations. Acknowledgements This material is based on research sponsored by a DARPA Computer Science Study Panel phase 3 award entitled “Crowdsourcing Translation” (contract D12PC00368). The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements by DARPA or the U.S. Government. This research was supported by the Johns Hopkins University Human Language Technology Center of Excellence and through gifts from Microsoft, Google and Facebook. References Sadaf Abdul-Rauf and Holger Schwenk. 2009. On the use of comparable corpora to improve SMT performance. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 16–23, March. Vamshi Ambati and Stephan Vogel. 2010. Can crowds build parallel corpora for machine translation systems? In Workshop on Creating Speech and Language Data with MTurk. 
1142 Vamshi Ambati, Stephan Vogel, and Jaime G Carbonell. 2010. Active learning and crowd-sourcing for machine translation. In LREC, volume 11, pages 2169–2174. Citeseer. Vamshi Ambati. 2012. Active Learning and Crowdsourcing for Machine Translation in Low Resource Scenarios. Ph.D. thesis, Language Technologies Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA. Michael S. Bernstein, Greg Little, Robert C. Miller, Bjrn Hartmann, Mark S. Ackerman, David R. Karger, David Crowell, and Katrina Panovich. 2010. Soylent: a word processor with a crowd inside. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST). Michael Bloodgood and Chris Callison-Burch. 2010. Large-scale cost-focused active learning for statistical machine translation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Chris Callison-Burch and Mark Dredze. 2010. Creating speech and language data with Amazon’s Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 1–12, June. Chris Callison-Burch. 2005. Linear B system description for the 2005 NIST MT evaluation exercise. In Proceedings of Machine Translation Evaluation Workshop. Chris Callison-Burch. 2009. Fast, cheap, and creative: Evaluating translation quality using amazon’s mechanical turk. In Proceedings of EMNLP. David L. Chen and William B. Dolan. 2012. Building a persistent workforce on mechanical turk for multilingual data collection. In Proceedings of the Human Computer Interaction International Conference. Ulrich Germann. 2001. Building a statistical machine translation system from scratch: How much bang for the buck can we expect? In Proceedings of the Workshop on Data-driven Methods in Machine Translation - Volume 14, DMMT ’01, pages 1–8. Spence Green, Jeffrey Heer, and Christopher D. Manning. 2013. The efficacy of human post-editing for language translation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’13, pages 439–448. Juan Pablo Hourcade, Benjamin B Bederson, Allison Druin, Anne Rose, Allison Farber, and Yoshifumi Takayama. 2003. The international children’s digital library: viewing digital books online. Interacting with Computers, 15(2):151–167. Chang Hu, Benjamin B. Bederson, and Philip Resnik. 2010. Translation by iterative collaboration between monolingual users. In Proceedings of ACM SIGKDD Workshop on Human Computation (HCOMP). Chang Hu, Benjamin B. Bederson, Philip Resnik, and Yakov Kronrod. 2011. Monotrans2: A new human computation system to support monolingual translation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, pages 1133–1136. Chang Hu. 2009. Collaborative translation by monolingual users. In CHI ’09 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’09, pages 3105–3108. Martin Kay. 1998. The proper place of men and machines in language translation. Machine Translation, 12(1/2):3–23, January. Kevin Knight and Ishwar Chander. 1994. Automated postediting of documents. In In Proceedings of AAAI. Philipp Koehn. 2010. Enabling monolingual translators: Post-editing vs. options. In HLT-NAACL’10, pages 537–545, June. Amy N Langville and Carl D Meyer. 2004. Deeper inside pagerank. Internet Mathematics, 1(3):335– 380. 
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1145–1154, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Generalized Language Model as the Combination of Skipped n-grams and Modified Kneser-Ney Smoothing Rene Pickhardt, Thomas Gottron, Martin K¨orner, Steffen Staab Institute for Web Science and Technologies, University of Koblenz-Landau, Germany {rpickhardt,gottron,mkoerner,staab}@uni-koblenz.de Paul Georg Wagner and Till Speicher Typology GbR [email protected] Abstract We introduce a novel approach for building language models based on a systematic, recursive exploration of skip n-gram models which are interpolated using modified Kneser-Ney smoothing. Our approach generalizes language models as it contains the classical interpolation with lower order models as a special case. In this paper we motivate, formalize and present our approach. In an extensive empirical experiment over English text corpora we demonstrate that our generalized language models lead to a substantial reduction of perplexity between 3.1% and 12.7% in comparison to traditional language models using modified Kneser-Ney smoothing. Furthermore, we investigate the behaviour over three other languages and a domain specific corpus where we observed consistent improvements. Finally, we also show that the strength of our approach lies in its ability to cope in particular with sparse training data. Using a very small training data set of only 736 KB text we yield improvements of even 25.7% reduction of perplexity. 1 Introduction motivation Language Models are a probabilistic approach for predicting the occurrence of a sequence of words. They are used in many applications, e.g. word prediction (Bickel et al., 2005), speech recognition (Rabiner and Juang, 1993), machine translation (Brown et al., 1990), or spelling correction (Mays et al., 1991). The task language models attempt to solve is the estimation of a probability of a given sequence of words wl 1 = w1, . . . , wl. The probability P(wl 1) of this sequence can be broken down into a product of conditional probabilities: P(wl 1) =P(w1) · P(w2|w1) · . . . · P(wl|w1 · · · wl−1) = lY i=1 P(wi|w1 · · · wi−1) (1) Because of combinatorial explosion and data sparsity, it is very difficult to reliably estimate the probabilities that are conditioned on a longer subsequence. Therefore, by making a Markov assumption the true probability of a word sequence is only approximated by restricting conditional probabilities to depend only on a local context wi−1 i−n+1 of n −1 preceding words rather than the full sequence wi−1 1 . The challenge in the construction of language models is to provide reliable estimators for the conditional probabilities. While the estimators can be learnt—using, e.g., a maximum likelihood estimator over n-grams obtained from training data—the obtained values are not very reliable for events which may have been observed only a few times or not at all in the training data. Smoothing is a standard technique to overcome this data sparsity problem. Various smoothing approaches have been developed and applied in the context of language models. Chen and Goodman (Chen and Goodman, 1999) introduced modified Kneser-Ney Smoothing, which up to now has been considered the state-of-theart method for language modelling over the last 15 years. 
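As a toy illustration (ours, not part of the paper's experiments) of the sparsity problem that all of these smoothing methods address, consider plain maximum-likelihood trigram estimates under the Markov approximation of Eq. (1): any test sentence containing a history that never occurred in training receives probability zero.

from collections import Counter

def train_mle_trigrams(sentences):
    trigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        words = ["<s>", "<s>"] + sent.split() + ["</s>"]
        for i in range(2, len(words)):
            trigrams[tuple(words[i - 2:i + 1])] += 1
            bigrams[tuple(words[i - 2:i])] += 1
    return trigrams, bigrams

def sentence_probability(sentence, trigrams, bigrams):
    # Product of conditional MLE trigram probabilities, i.e. Eq. (1)
    # restricted to a two-word history.
    words = ["<s>", "<s>"] + sentence.split() + ["</s>"]
    prob = 1.0
    for i in range(2, len(words)):
        context = tuple(words[i - 2:i])
        if bigrams[context] == 0:   # unseen history: the MLE is undefined
            return 0.0
        prob *= trigrams[tuple(words[i - 2:i + 1])] / bigrams[context]
    return prob

trigrams, bigrams = train_mle_trigrams(["the cat sat on the mat",
                                        "the cat ran to the mat"])
print(sentence_probability("the cat sat on the mat", trigrams, bigrams))  # > 0
print(sentence_probability("the dog sat on the mat", trigrams, bigrams))  # 0.0

Smoothing methods redistribute probability mass so that such unseen events receive small but non-zero estimates.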
Modified Kneser-Ney Smoothing is an interpolating method which combines the estimated conditional probabilities P(wi|wi−1 i−n+1) recursively with lower order models involving a shorter local context wi−1 i−n+2 and their estimate for P(wi|wi−1 i−n+2). The motivation for using lower order models is that shorter contexts may be observed more often and, thus, suffer less from data sparsity. However, a single rare word towards the end of the local context will always cause the context to be observed rarely in the training data and hence will lead to an unreliable estimation. 1145 Because of Zipfian word distributions, most words occur very rarely and hence their true probability of occurrence may be estimated only very poorly. One word that appears at the end of a local context wi−1 i−n+1 and for which only a poor approximation exists may adversely affect the conditional probabilities in language models of all lengths — leading to severe errors even for smoothed language models. Thus, the idea motivating our approach is to involve several lower order models which systematically leave out one position in the context (one may think of replacing the affected word in the context with a wildcard) instead of shortening the sequence only by one word at the beginning. This concept of introducing gaps in n-grams is referred to as skip n-grams (Ney et al., 1994; Huang et al., 1993). Among other techniques, skip n-grams have also been considered as an approach to overcome problems of data sparsity (Goodman, 2001). However, to best of our knowledge, language models making use of skip n-grams models have never been investigated to their full extent and over different levels of lower order models. Our approach differs as we consider all possible combinations of gaps in a local context and interpolate the higher order model with all possible lower order models derived from adding gaps in all different ways. In this paper we make the following contributions: 1. We provide a framework for using modified Kneser-Ney smoothing in combination with a systematic exploration of lower order models based on skip n-grams. 2. We show how our novel approach can indeed easily be interpreted as a generalized version of the current state-of-the-art language models. 3. We present a large scale empirical analysis of our generalized language models on eight data sets spanning four different languages, namely, a wikipedia-based text corpus and the JRC-Acquis corpus of legislative texts. 4. We empirically observe that introducing skip n-gram models may reduce perplexity by 12.7% compared to the current state-of-theart using modified Kneser-Ney models on large data sets. Using small training data sets we observe even higher reductions of perplexity of up to 25.6%. The rest of the paper is organized as follows. We start with reviewing related work in Section 2. We will then introduce our generalized language models in Section 3. After explaining the evaluation methodology and introducing the data sets in Section 4 we will present the results of our evaluation in Section 5. In Section 6 we discuss why a generalized language model performs better than a standard language model. Finally, in Section 7 we summarize our findings and conclude with an overview of further interesting research challenges in the field of generalized language models. 2 Related Work Work related to our generalized language model approach can be divided in two categories: various smoothing techniques for language models and approaches making use of skip n-grams. 
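Before surveying these two lines of work, a small sketch (our own notation, not the paper's released toolkit) makes the skipped-context idea from the previous paragraphs concrete: from a three-word history, classical back-off keeps only the variant that drops the first word, whereas the generalized approach also considers every variant in which one later position is replaced by a wildcard.

def one_skip_contexts(history):
    # All contexts derived from `history` by leaving out exactly one position;
    # "_" marks the wildcard placeholder for a skipped word.
    variants = [tuple(history[1:])]                  # classical back-off
    for i in range(1, len(history)):
        variants.append(history[:i] + ("_",) + history[i + 1:])
    return variants

print(one_skip_contexts(("slept", "in", "the")))
# [('in', 'the'), ('slept', '_', 'the'), ('slept', 'in', '_')]

Interpolating with all of these variants, rather than only the first, is what distinguishes the generalized language models introduced in Section 3.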
Smoothing techniques for language models have a long history. Their aim is to overcome data sparsity and provide more reliable estimators—in particular for rare events. The Good Turing estimator (Good, 1953), deleted interpolation (Jelinek and Mercer, 1980), Katz backoff (Katz, 1987) and Kneser-Ney smoothing (Kneser and Ney, 1995) are just some of the approaches to be mentioned. Common strategies of these approaches are to either backoff to lower order models when a higher order model lacks sufficient training data for good estimation, to interpolate between higher and lower order models or to interpolate with a prior distribution. Furthermore, the estimation of the amount of unseen events from rare events aims to find the right weights for interpolation as well as for discounting probability mass from unreliable estimators and to retain it for unseen events. The state of the art is a modified version of Kneser-Ney smoothing introduced in (Chen and Goodman, 1999). The modified version implements a recursive interpolation with lower order models, making use of different discount values for more or less frequently observed events. This variation has been compared to other smoothing techniques on various corpora and has shown to outperform competing approaches. We will review modified Kneser-Ney smoothing in Section 2.1 in more detail as we reuse some ideas to define our generalized language model. 1146 Smoothing techniques which do not rely on using lower order models involve clustering (Brown et al., 1992; Ney et al., 1994), i.e. grouping together similar words to form classes of words, as well as skip n-grams (Ney et al., 1994; Huang et al., 1993). Yet other approaches make use of permutations of the word order in n-grams (SchukatTalamazzini et al., 1995; Goodman, 2001). Skip n-grams are typically used to incorporate long distance relations between words. Introducing the possibility of gaps between the words in an n-gram allows for capturing word relations beyond the level of n consecutive words without an exponential increase in the parameter space. However, with their restriction on a subsequence of words, skip n-grams are also used as a technique to overcome data sparsity (Goodman, 2001). In related work different terminology and different definitions have been used to describe skip n-grams. Variations modify the number of words which can be skipped between elements in an n-gram as well as the manner in which the skipped words are determined (e.g. fixed patterns (Goodman, 2001) or functional words (Gao and Suzuki, 2005)). The impact of various extensions and smoothing techniques for language models is investigated in (Goodman, 2001; Goodman, 2000). In particular, the authors compared Kneser-Ney smoothing, Katz backoff smoothing, caching, clustering, inclusion of higher order n-grams, sentence mixture and skip n-grams. They also evaluated combinations of techniques, for instance, using skip n-gram models in combination with Kneser-Ney smoothing. The experiments in this case followed two paths: (1) interpolating a 5-gram model with lower order distribution introducing a single gap and (2) interpolating higher order models with skip n-grams which retained only combinations of two words. Goodman reported on small data sets and in the best case a moderate improvement of cross entropy in the range of 0.02 to 0.04. In (Guthrie et al., 2006), the authors investigated the increase of observed word combinations when including skips in n-grams. 
The conclusion was that using skip n-grams is often more effective for increasing the number of observations than increasing the corpus size. This observation aligns well with our experiments. 2.1 Review of Modified Kneser-Ney Smoothing We briefly recall modified Kneser-Ney Smoothing as presented in (Chen and Goodman, 1999). Modified Kneser-Ney implements smoothing by interpolating between higher and lower order n-gram language models. The highest order distribution is interpolated with lower order distribution as follows: PMKN(wi|wi−1 i−n+1) = max{c(wi i−n+1) −D(c(wi i−n+1)), 0} c(wi−1 i−n+1) + γhigh(wi−1 i−n+1) ˆPMKN(wi|wi−1 i−n+2) (2) where c(wi i−n+1) provides the frequency count that sequence wi i−n+1 occurs in training data, D is a discount value (which depends on the frequency of the sequence) and γhigh depends on D and is the interpolation factor to mix in the lower order distribution1. Essentially, interpolation with a lower order model corresponds to leaving out the first word in the considered sequence. The lower order models are computed differently using the notion of continuation counts rather than absolute counts: ˆPMKN(wi|(wi−1 i−n+1)) = max{N1+(•wi i−n+1) −D(c(wi i−n+1)), 0} N1+(•wi−1 i−n+1•) + γmid(wi−1 i−n+1) ˆPMKN(wi|wi−1 i−n+2)) (3) where the continuation counts are defined as N1+(•wi i−n+1) = |{wi−n : c(wi i−n) > 0}|, i.e. the number of different words which precede the sequence wi i−n+1. The term γmid is again an interpolation factor which depends on the discounted probability mass D in the first term of the formula. 3 Generalized Language Models 3.1 Notation for Skip n-gram with k Skips We express skip n-grams using an operator notation. The operator ∂i applied to an n-gram removes the word at the i-th position. For instance: ∂3w1w2w3w4 = w1w2 w4, where is used as wildcard placeholder to indicate a removed word. The wildcard operator allows for 1The factors γ and D are quite technical and lengthy. As they do not play a significant role for understanding our novel approach we refer to Appendix A for details. 1147 larger number of matches. For instance, when c(w1w2w3aw4) = x and c(w1w2w3bw4) = y then c(w1w2 w4) ≥x + y since at least the two sequences w1w2w3aw4 and w1w2w3bw4 match the sequence w1w2 w4. In order to align with standard language models the skip operator applied to the first word of a sequence will remove the word instead of introducing a wildcard. In particular the equation ∂1wi i−n+1 = wi i−n+2 holds where the right hand side is the subsequence of wi i−n+1 omitting the first word. We can thus formulate the interpolation step of modified Kneser-Ney smoothing using our notation as ˆPMKN(wi|wi−1 i−n+2) = ˆPMKN(wi|∂1wi−1 i−n+1). Thus, our skip n-grams correspond to n-grams of which we only use k words, after having applied the skip operators ∂i1 . . . ∂in−k 3.2 Generalized Language Model Interpolation with lower order models is motivated by the problem of data sparsity in higher order models. However, lower order models omit only the first word in the local context, which might not necessarily be the cause for the overall n-gram to be rare. This is the motivation for our generalized language models to not only interpolate with one lower order model, where the first word in a sequence is omitted, but also with all other skip ngram models, where one word is left out. Combining this idea with modified Kneser-Ney smoothing leads to a formula similar to (2). 
PGLM(wi|wi−1 i−n+1) = max{c(wi i−n+1) −D(c(wi i−n+1)), 0} c(wi−1 i−n+1) + γhigh(wi−1 i−n+1) n−1 X j=1 1 n−1 ˆPGLM(wi|∂jwi−1 i−n+1) (4) The difference between formula (2) and formula (4) is the way in which lower order models are interpolated. Note, the sum over all possible positions in the context wi−1 i−n+1 for which we can skip a word and the according lower order models PGLM(wi|∂j(wi−1 i−n+1)). We give all lower order models the same weight 1 n−1. The same principle is recursively applied in the lower order models in which some words of the full n-gram are already skipped. As in modified Kneser-Ney smoothing we use continuation counts for the lower order models, incorporating the skip operator also for these counts. Incorporating this directly into modified Kneser-Ney smoothing leads in the second highest model to: ˆPGLM(wi|∂j(wi−1 i−n+1)) = (5) max{N1+(∂j(wi i−n)) −D(c(∂j(wi i−n+1))), 0} N1+(∂j(wi−1 i−n+1)•) +γmid(∂j(wi−1 i−n+1)) n−1 X k=1 k̸=j 1 n−2 ˆPGLM(wi|∂j∂k(wi−1 i−n+1)) Given that we skip words at different positions, we have to extend the notion of the count function and the continuation counts. The count function applied to a skip n-gram is given by c(∂j(wi i−n))= P wj c(wi i−n), i.e. we aggregate the count information over all words which fill the gap in the ngram. Regarding the continuation counts we define: N1+(∂j(wi i−n)) = |{wi−n+j−1 :c(wi i−n)>0}| (6) N1+(∂j(wi−1 i−n)•) = |{(wi−n+j−1, wi):c(wi i−n)>0}| (7) As lowest order model we use—just as done for traditional modified Kneser-Ney (Chen and Goodman, 1999)—a unigram model interpolated with a uniform distribution for unseen words. The overall process is depicted in Figure 1, illustrating how the higher level models are recursively smoothed with several lower order ones. 4 Experimental Setup and Data Sets To evaluate the quality of our generalized language models we empirically compare their ability to explain sequences of words. To this end we use text corpora, split them into test and training data, build language models as well as generalized language models over the training data and apply them on the test data. We employ established metrics, such as cross entropy and perplexity. In the following we explain the details of our experimental setup. 4.1 Data Sets For evaluation purposes we employed eight different data sets. The data sets cover different domains and languages. As languages we considered English (en), German (de), French (fr), and Italian (it). As general domain data set we used the full collection of articles from Wikipedia (wiki) in the corresponding languages. The download dates of the dumps are displayed in Table 1. 1148 Figure 1: Interpolation of models of different order and using skip patterns. The value of n indicates the length of the raw n-grams necessary for computing the model, the value of k indicates the number of words actually used in the model. The wild card symbol marks skipped words in an n-gram. The arrows indicate how a higher order model is interpolated with lower order models which skips one word. The bold arrows correspond to interpolation of models in traditional modified Kneser-Ney smoothing. The lighter arrows illustrate the additional interpolations introduced by our generalized language models. de en fr it Nov 22nd Nov 04th Nov 20th Nov 25th Table 1: Download dates of Wikipedia snapshots in November 2013. Special purpose domain data are provided by the multi-lingual JRC-Acquis corpus of legislative texts (JRC) (Steinberger et al., 2006). 
Table 2 gives an overview of the data sets and provides some simple statistics of the covered languages and the size of the collections. Statistics Corpus total words unique words in Mio. in Mio. wiki-de 579 9.82 JRC-de 30.9 0.66 wiki-en 1689 11.7 JRC-en 39.2 0.46 wiki-fr 339 4.06 JRC-fr 35.8 0.46 wiki-it 193 3.09 JRC-it 34.4 0.47 Table 2: Word statistics and size of of evaluation corpora The data sets come in the form of structured text corpora which we cleaned from markup and tokenized to generate word sequences. We filtered the word tokens by removing all character sequences which did not contain any letter, digit or common punctuation marks. Eventually, the word token sequences were split into word sequences of length n which provided the basis for the training and test sets for all algorithms. Note that we did not perform case-folding nor did we apply stemming algorithms to normalize the word forms. Also, we did our evaluation using case sensitive training and test data. Additionally, we kept all tokens for named entities such as names of persons or places. 4.2 Evaluation Methodology All data sets have been randomly split into a training and a test set on a sentence level. The training sets consist of 80% of the sentences, which have been used to derive n-grams, skip n-grams and corresponding continuation counts for values of n between 1 and 5. Note that we have trained a prediction model for each data set individually. From the remaining 20% of the sequences we have randomly sampled a separate set of 100, 000 sequences of 5 words each. These test sequences have also been shortened to sequences of length 3, and 4 and provide a basis to conduct our final experiments to evaluate the performance of the different algorithms. We learnt the generalized language models on the same split of the training corpus as the standard language model using modified Kneser-Ney smoothing and we also used the same set of test sequences for a direct comparison. To ensure rigour and openness of research the data set for training as well as the test sequences and the entire source code is open source. 2 3 4 We compared the probabilities of our language model implementation (which is a subset of the generalized language model) using KN as well as MKN smoothing with the Kyoto Language Model Toolkit 5. Since we got the same results for small n and small data sets we believe that our implementation is correct. In a second experiment we have investigated the impact of the size of the training data set. The wikipedia corpus consists of 1.7 bn. words. 2http://west.uni-koblenz.de/Research 3https://github.com/renepickhardt/generalized-languagemodeling-toolkit 4http://glm.rene-pickhardt.de 5http://www.phontron.com/kylm/ 1149 Thus, the 80% split for training consists of 1.3 bn. words. We have iteratively created smaller training sets by decreasing the split factor by an order of magnitude. So we created 8% / 92% and 0.8% / 99.2% split, and so on. We have stopped at the 0.008%/99.992% split as the training data set in this case consisted of less words than our 100k test sequences which we still randomly sampled from the test data of each split. Then we trained a generalized language model as well as a standard language model with modified Kneser-Ney smoothing on each of these samples of the training data. Again we have evaluated these language models on the same random sample of 100, 000 sequences as mentioned above. 
4.3 Evaluation Metrics As evaluation metric we use perplexity: a standard measure in the field of language models (Manning and Sch¨utze, 1999). First we calculate the cross entropy of a trained language model given a test set using H(Palg) = − X s∈T PMLE(s) · log2 Palg(s) (8) Where Palg will be replaced by the probability estimates provided by our generalized language models and the estimates of a language model using modified Kneser-Ney smoothing. PMLE, instead, is a maximum likelihood estimator of the test sequence to occur in the test corpus. Finally, T is the set of test sequences. The perplexity is defined as: Perplexity(Palg) = 2H(Palg) (9) Lower perplexity values indicate better results. 5 Results 5.1 Baseline As a baseline for our generalized language model (GLM) we have trained standard language models using modified Kneser-Ney Smoothing (MKN). These models have been trained for model lengths 3 to 5. For unigram and bigram models MKN and GLM are identical. 5.2 Evaluation Experiments The perplexity values for all data sets and various model orders can be seen in Table 3. In this table we also present the relative reduction of perplexity in comparison to the baseline. model length Experiments n = 3 n = 4 n = 5 wiki-de MKN 1074.1 778.5 597.1 wiki-de GLM 1031.1 709.4 521.5 rel. change 4.0% 8.9% 12.7% JRC-de MKN 235.4 138.4 94.7 JRC-de GLM 229.4 131.8 86.0 rel. change 2.5% 4.8% 9.2% wiki-en MKN 586.9 404 307.3 wiki-en GLM 571.6 378.1 275 rel. change 2.6% 6.1% 10.5% JRC-en MKN 147.2 82.9 54.6 JRC-en GLM 145.3 80.6 52.5 rel. change 1.3% 2.8% 3.9% wiki-fr MKN 538.6 385.9 298.9 wiki-fr GLM 526.7 363.8 272.9 rel. change 2.2% 5.7% 8.7% JRC-fr MKN 155.2 92.5 63.9 JRC-fr GLM 153.5 90.1 61.7 rel. change 1.1% 2.5% 3.5% wiki-it MKN 738.4 532.9 416.7 wiki-it GLM 718.2 500.7 382.2 rel. change 2.7% 6.0% 8.3% JRC-it MKN 177.5 104.4 71.8 JRC-it GLM 175.1 101.8 69.6 rel. change 1.3% 2.6% 3.1% Table 3: Absolute perplexity values and relative reduction of perplexity from MKN to GLM on all data sets for models of order 3 to 5 As we can see, the GLM clearly outperforms the baseline for all model lengths and data sets. In general we see a larger improvement in performance for models of higher orders (n = 5). The gain for 3-gram models, instead, is negligible. For German texts the increase in performance is the highest (12.7%) for a model of order 5. We also note that GLMs seem to work better on broad domain text rather than special purpose text as the reduction on the wiki corpora is constantly higher than the reduction of perplexity on the JRC corpora. We made consistent observations in our second experiment where we iteratively shrank the size of the training data set. We calculated the relative reduction in perplexity from MKN to GLM 1150 for various model lengths and the different sizes of the training data. The results for the English Wikipedia data set are illustrated in Figure 2. We see that the GLM performs particularly well on small training data. As the size of the training data set becomes smaller (even smaller than the evaluation data), the GLM achieves a reduction of perplexity of up to 25.7% compared to language models with modified Kneser-Ney smoothing on the same data set. The absolute perplexity values for this experiment are presented in Table 4. model length Experiments n = 3 n = 4 n = 5 80% MKN 586.9 404 307.3 80% GLM 571.6 378.1 275 rel. change 2.6% 6.5% 10.5% 8% MKN 712.6 539.8 436.5 8% GLM 683.7 492.8 382.5 rel. 
change 4.1% 8.7% 12.4% 0.8% MKN 894.0 730.0 614.1 0.8% GLM 838.7 650.1 528.7 rel. change 6.2% 10.9% 13.9% 0.08% MKN 1099.5 963.8 845.2 0.08% GLM 996.6 820.7 693.4 rel. change 9.4% 14.9% 18.0% 0.008% MKN 1212.1 1120.5 1009.6 0.008% GLM 1025.6 875.5 750.3 rel. change 15.4% 21.9% 25.7% Table 4: Absolute perplexity values and relative reduction of perplexity from MKN to GLM on shrunk training data sets for the English Wikipedia for models of order 3 to 5 Our theory as well as the results so far suggest that the GLM performs particularly well on sparse training data. This conjecture has been investigated in a last experiment. For each model length we have split the test data of the largest English Wikipedia corpus into two disjoint evaluation data sets. The data set unseen consists of all test sequences which have never been observed in the training data. The set observed consists only of test sequences which have been observed at least once in the training data. Again we have calculated the perplexity of each set. For reference, also the values of the complete test data set are shown in Table 5. model length Experiments n = 3 n = 4 n = 5 MKNcomplete 586.9 404 307.3 GLMcomplete 571.6 378.1 275 rel. change 2.6% 6.5% 10.5% MKNunseen 14696.8 2199.8 846.1 GLMunseen 13058.7 1902.4 714.4 rel. change 11.2% 13.5% 15.6% MKNobserved 220.2 88.0 43.4 GLMobserved 220.6 88.3 43.5 rel. change −0.16% −0.28% −0.15% Table 5: Absolute perplexity values and relative reduction of perplexity from MKN to GLM for the complete and split test file into observed and unseen sequences for models of order 3 to 5. The data set is the largest English Wikipedia corpus. As expected we see the overall perplexity values rise for the unseen test case and decline for the observed test case. More interestingly we see that the relative reduction of perplexity of the GLM over MKN increases from 10.5% to 15.6% on the unseen test case. This indicates that the superior performance of the GLM on small training corpora and for higher order models indeed comes from its good performance properties with regard to sparse training data. It also confirms that our motivation to produce lower order n-grams by omitting not only the first word of the local context but systematically all words has been fruitful. However, we also see that for the observed sequences the GLM performs slightly worse than MKN. For the observed cases we find the relative change to be negligible. 6 Discussion In our experiments we have observed an improvement of our generalized language models over classical language models using Kneser-Ney smoothing. The improvements have been observed for different languages, different domains as well as different sizes of the training data. In the experiments we have also seen that the GLM performs well in particular for small training data sets and sparse data, encouraging our initial motivation. This feature of the GLM is of particular value, as data sparsity becomes a more and more immanent problem for higher values of n. This known fact is underlined also by the statis1151 0% 5% 10% 15% 20% 25% 30% 0.1 1 10 100 1000 relative change in perplexity data set size [mio words] Relative change of perplexity for GLM over MKN MKN (baseline) for n=3,4, and 5 n=5 n=4 n=3 Figure 2: Variation of the size of the training data on 100k test sequences on the English Wikipedia data set with different model lengths for GLM. tics shown in Table 6. 
The fraction of total ngrams which appear only once in our Wikipedia corpus increases for higher values of n. However, for the same value of n the skip n-grams are less rare. Our generalized language models leverage this additional information to obtain more reliable estimates for the probability of word sequences. wn 1 total unique w1 0.5% 64.0% w1w2 5.1% 68.2% w1 w3 8.0% 79.9% w1 w4 9.6% 72.1% w1 w5 10.1% 72.7% w1w2w3 21.1% 77.5% w1 w3w4 28.2% 80.4% w1w2 w4 28.2% 80.7% w1 w4w5 31.7% 81.9% w1 w3 w5 35.3% 83.0% w1w2 w5 31.5% 82.2% w1w2w3w4 44.7% 85.4% w1 w3w4w5 52.7% 87.6% w1w2 w4w5 52.6% 88.0% w1w2w3 w5 52.3% 87.7% w1w2w3w4w5 64.4% 90.7% Table 6: Percentage of generalized n-grams which occur only once in the English Wikipedia corpus. Total means a percentage relative to the total amount of sequences. Unique means a percentage relative to the amount of unique sequences of this pattern in the data set. Beyond the general improvements there is an additional path for benefitting from generalized language models. As it is possible to better leverage the information in smaller and sparse data sets, we can build smaller models of competitive performance. For instance, when looking at Table 4 we observe the 3-gram MKN approach on the full training data set to achieve a perplexity of 586.9. This model has been trained on 7 GB of text and the resulting model has a size of 15 GB and 742 Mio. entries for the count and continuation count values. Looking for a GLM with comparable but better performance we see that the 5-gram model trained on 1% of the training data has a perplexity of 528.7. This GLM model has a size of 9.5 GB and contains only 427 Mio. entries. So, using a far smaller set of training data we can build a smaller model which still demonstrates a competitive performance. 7 Conclusion and Future Work 7.1 Conclusion We have introduced a novel generalized language model as the systematic combination of skip ngrams and modified Kneser-Ney smoothing. The main strength of our approach is the combination of a simple and elegant idea with an an empirically convincing result. Mathematically one can see that the GLM includes the standard language model with modified Kneser-Ney smoothing as a sub model and is consequently a real generalization. In an empirical evaluation, we have demonstrated that for higher orders the GLM outperforms MKN for all test cases. The relative improvement in perplexity is up to 12.7% for large data sets. GLMs also performs particularly well on small and sparse sets of training data. On a very 1152 small training data set we observed a reduction of perplexity by 25.7%. Our experiments underline that the generalized language models overcome in particular the weaknesses of modified Kneser-Ney smoothing on sparse training data. 7.2 Future work A desirable extension of our current definition of GLMs will be the combination of different lower lower order models in our generalized language model using different weights for each model. Such weights can be used to model the statistical reliability of the different lower order models. The value of the weights would have to be chosen according to the probability or counts of the respective skip n-grams. Another important step that has not been considered yet is compressing and indexing of generalized language models to improve the performance of the computation and be able to store them in main memory. 
Regarding the scalability of the approach to very large data sets we intend to apply the Map Reduce techniques from (Heafield et al., 2013) to our generalized language models in order to have a more scalable calculation. This will open the path also to another interesting experiment. Goodman (Goodman, 2001) observed that increasing the length of n-grams in combination with modified Kneser-Ney smoothing did not lead to improvements for values of n beyond 7. We believe that our generalized language models could still benefit from such an increase. They suffer less from the sparsity of long n-grams and can overcome this sparsity when interpolating with the lower order skip n-grams while benefiting from the larger context. Finally, it would be interesting to see how applications of language models—like next word prediction, machine translation, speech recognition, text classification, spelling correction, e.g.— benefit from the better performance of generalized language models. Acknowledgements We would like to thank Heinrich Hartmann for a fruitful discussion regarding notation of the skip operator for n-grams. The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013), REVEAL (Grant agree number 610928). References Steffen Bickel, Peter Haider, and Tobias Scheffer. 2005. Predicting sentences using n-gram language models. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 193–200, Stroudsburg, PA, USA. Association for Computational Linguistics. Peter F Brown, John Cocke, Stephen A Della Pietra, Vincent J Della Pietra, Fredrick Jelinek, John D Lafferty, Robert L Mercer, and Paul S Roossin. 1990. A statistical approach to machine translation. Computational linguistics, 16(2):79–85. Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. Comput. Linguist., 18(4):467–479, December. Stanley Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical report, TR-10-98, Harvard University, August. Stanley Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13(4):359–393. Jianfeng Gao and Hisami Suzuki. 2005. Long distance dependency in language modeling: An empirical study. In Keh-Yih Su, Junichi Tsujii, JongHyeok Lee, and OiYee Kwong, editors, Natural Language Processing IJCNLP 2004, volume 3248 of Lecture Notes in Computer Science, pages 396– 405. Springer Berlin Heidelberg. Irwin J. Good. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3-4):237–264. Joshua T. Goodman. 2000. Putting it all together: language model combination. In Acoustics, Speech, and Signal Processing, 2000. ICASSP ’00. Proceedings. 2000 IEEE International Conference on, volume 3, pages 1647–1650 vol.3. Joshua T. Goodman. 2001. A bit of progress in language modeling – extended version. Technical Report MSR-TR-2001-72, Microsoft Research. David Guthrie, Ben Allison, Wei Liu, Louise Guthrie, and York Wilks. 2006. A closer look at skipgram modelling. In Proceedings LREC’2006, pages 1222–1225. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified kneser-ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. 
1153 Xuedong Huang, Fileno Alleva, Hsiao-Wuen Hon, Mei-Yuh Hwang, Kai-Fu Lee, and Ronald Rosenfeld. 1993. The sphinx-ii speech recognition system: an overview. Computer Speech & Language, 7(2):137 – 148. F. Jelinek and R.L. Mercer. 1980. Interpolated estimation of markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice, pages 381–397. S. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. Acoustics, Speech and Signal Processing, IEEE Transactions on, 35(3):400–401. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on, volume 1, pages 181–184. IEEE. Christopher D. Manning and Hinrich Sch¨utze. 1999. Foundations of statistical natural language processing. MIT Press, Cambridge, MA, USA. Eric Mays, Fred J Damerau, and Robert L Mercer. 1991. Context based spelling correction. Information Processing & Management, 27(5):517–522. Hermann Ney, Ute Essen, and Reinhard Kneser. 1994. On structuring probabilistic dependences in stochastic language modelling. Computer Speech & Language, 8(1):1 – 38. Lawrence Rabiner and Biing-Hwang Juang. 1993. Fundamentals of Speech Recognition. Prentice Hall. Ernst-G¨unter Schukat-Talamazzini, R Hendrych, Ralf Kompe, and Heinrich Niemann. 1995. Permugram language models. In Fourth European Conference on Speech Communication and Technology. Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Tomaz Erjavec, Dan Tufis, and Daniel Varga. 2006. The jrc-acquis: A multilingual aligned parallel corpus with 20+ languages. In LREC’06: Proceedings of the 5th International Conference on Language Resources and Evaluation. A Discount Values and Weights in Modified Kneser Ney The discount value D(c) used in formula (2) is defined as (Chen and Goodman, 1999): D(c) =          0 if c = 0 D1 if c = 1 D2 if c = 2 D3+ if c > 2 (10) The discounting values D1, D2, and D3+ are defined as (Chen and Goodman, 1998) D1 = 1 −2Y n2 n1 (11a) D2 = 2 −3Y n3 n2 (11b) D3+ = 3 −4Y n4 n3 (11c) with Y = n1 n1+n2 and ni is the total number of ngrams which appear exactly i times in the training data. The weight γhigh(wi−1 i−n+1) is defined as: γhigh(wi−1 i−n+1) = (12) D1N1(wi−1 i−n+1•)+D2N2(wi−1 i−n+1•)+D3+N3+(wi−1 i−n+1•) c(wi−1 i−n+1) And the weight γmid(wi−1 i−n+1) is defined as: γmid(wi−1 i−n+1) = (13) D1N1(wi−1 i−n+1•)+D2N2(wi−1 i−n+1•)+D3+N3+(wi−1 i−n+1•) N1+(•wi−1 i−n+1•) where N1(wi−1 i−n+1•), N2(wi−1 i−n+1•), and N3+(wi−1 i−n+1•) are analogously defined to N1+(wi−1 i−n+1•). 1154
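To make the recursion concrete, the following deliberately simplified sketch (ours, not the released toolkit) combines the discount values defined above with the uniform 1/(n-1) skip interpolation of Eq. (4). It uses absolute counts at every level, a fixed interpolation weight in place of the gamma terms, and assumes that the count tables are Counters which already contain unigrams as well as wildcard patterns aggregated over the skipped word, i.e. c(w1 _ w3) = sum over w2 of c(w1 w2 w3). The continuation counts of Eqs. (5)-(7) are omitted, so this illustrates the recursive structure rather than reproducing the full model.

from collections import Counter

def discounts(ngram_counts):
    # D1, D2, D3+ from the counts-of-counts n1..n4 (Eqs. 11a-11c);
    # assumes n1..n4 are all non-zero, which holds for realistic corpora.
    n = Counter(ngram_counts.values())
    y = n[1] / (n[1] + 2 * n[2])
    return (1 - 2 * y * n[2] / n[1],
            2 - 3 * y * n[3] / n[2],
            3 - 4 * y * n[4] / n[3])

def discount(c, d1, d2, d3plus):
    # D(c) as defined in Eq. (10).
    if c == 0:
        return 0.0
    return d1 if c == 1 else d2 if c == 2 else d3plus

def skip(context, j):
    # The skip operator: drop the first word for j == 0 (classical back-off),
    # otherwise replace position j by the wildcard "_".
    return context[1:] if j == 0 else context[:j] + ("_",) + context[j + 1:]

def p_glm(word, context, counts, context_counts, d, weight=0.4):
    # Recursive estimate interpolating, with uniform weights, all lower order
    # models obtained by skipping one not-yet-skipped position (cf. Eq. (4)).
    filled = [j for j, w in enumerate(context) if w != "_"]
    if not filled:                                    # unigram base case
        total = sum(c for ng, c in counts.items() if len(ng) == 1)
        return counts[(word,)] / total if total else 0.0
    c_full = counts[context + (word,)]
    c_ctx = context_counts[context]
    higher = max(c_full - discount(c_full, *d), 0.0) / c_ctx if c_ctx else 0.0
    lower = [p_glm(word, skip(context, j), counts, context_counts, d, weight)
             for j in filled]
    return higher + weight * sum(lower) / len(lower)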
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1155–1165, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics A Semiparametric Gaussian Copula Regression Model for Predicting Financial Risks from Earnings Calls William Yang Wang School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 [email protected] Zhenhao Hua School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 [email protected] Abstract Earnings call summarizes the financial performance of a company, and it is an important indicator of the future financial risks of the company. We quantitatively study how earnings calls are correlated with the financial risks, with a special focus on the financial crisis of 2009. In particular, we perform a text regression task: given the transcript of an earnings call, we predict the volatility of stock prices from the week after the call is made. We propose the use of copula: a powerful statistical framework that separately models the uniform marginals and their complex multivariate stochastic dependencies, while not requiring any prior assumptions on the distributions of the covariate and the dependent variable. By performing probability integral transform, our approach moves beyond the standard count-based bag-ofwords models in NLP, and improves previous work on text regression by incorporating the correlation among local features in the form of semiparametric Gaussian copula. In experiments, we show that our model significantly outperforms strong linear and non-linear discriminative baselines on three datasets under various settings. 1 Introduction Predicting the risks of publicly listed companies is of great interests not only to the traders and analysts on the Wall Street, but also virtually anyone who has investments in the market (Kogan et al., 2009). Traditionally, analysts focus on quantitative modeling of historical trading data. Today, even though earnings calls transcripts are abundantly available, their distinctive communicative practices (Camiciottoli, 2010), and correlations with the financial risks, in particular, future stock performances (Price et al., 2012), are not well studied in the past. Earnings calls are conference calls where a listed company discusses the financial performance. Typically, a earnings call contains two parts: the senior executives first report the operational outcomes, as well as the current financial performance, and then discuss their perspectives on the future of the company. The second part of the teleconference includes a question answering session where the floor will be open to investors, analysts, and other parties for inquiries. The question we ask is that, even though each earnings call has distinct styles, as well as different speakers and mixed formats, can we use earnings calls to predict the financial risks of the company in the limited future? Given a piece of earnings call transcript, we investigate a semiparametric approach for automatic prediction of future financial risk1. To do this, we formulate the problem as a text regression task, and use a Gaussian copula with probability integral transform to model the uniform marginals and their dependencies. 
Copula models (Schweizer and Sklar, 1983; Nelsen, 1999) are often used by statisticians (Genest and Favre, 2007; Liu et al., 2012; Masarotto and Varin, 2012) and economists (Chen and Fan, 2006) to study the bivariate and multivariate stochastic dependency among random variables, but they are very new to the machine learning (Ghahramani et al., 2012; Han et al., 2012; Xiang and Neville, 2013; Lopezpaz et al., 2013) and related communities (Eickhoff et al., 2013). To the best of our knowledge, even though the term “copula” is named for the resemblance to grammatical copulas in linguistics, copula models have not been explored in the NLP community. To evaluate the performance of our approach, we compare with a standard squared loss linear regression baseline, as well as strong baselines such as linear and non-linear support 1In this work, the risk is defined as the measured volatility of stock prices from the week following the earnings call teleconference. See details in Section 5. 1155 vector machines (SVMs) that are widely used in text regression tasks. By varying different experimental settings on three datasets concerning different periods of the Great Recession from 20062013, we empirically show that our approach significantly outperforms the baselines by a wide margin. Our main contributions are: • We are among the first to formally study transcripts of earnings calls to predict financial risks. • We propose a novel semiparametric Gaussian copula model for text regression. • Our results significantly outperform standard linear regression and strong SVM baselines. • By varying the number of dimensions of the covariates and the size of the training data, we show that the improvements over the baselines are robust across different parameter settings on three datasets. In the next section, we outline related work in modeling financial reports and text regression. In Section 3, the details of the semiparametric copula model are introduced. We then describe the dataset and dependent variable in this study, and the experiments are shown in Section 6. We discuss the results and findings in Section 7 and then conclude in Section 8. 2 Related Work Fung et al. (2003) are among the first to study SVM and text mining methods in the market prediction domain, where they align financial news articles with multiple time series to simulate the 33 stocks in the Hong Kong Hang Seng Index. However, text regression in the financial domain have not been explored until recently. Kogan et al. (2009) model the SEC-mandated annual reports, and performs linear SVM regression with ϵ-insensitive loss function to predict the measured volatility. Another recent study (Wang et al., 2013) uses exactly the same max-margin regression technique, but with a different focus on the financial sentiment. Using the same dataset, Tsai and Wang (2013) reformulate the regression problem as a text ranking problem. Note that all these regression studies above investigate the SEC-mandated annual reports, which are very different from the earnings calls in many aspects such as length, format, vocabulary, and genre. Most recently, Xie et al. (2013) have proposed the use of frame-level semantic features to understand financial news, but they treat the stock movement prediction problem as a binary classification task. Broadly speaking, our work is also aligned to recent studies that make use of social media data to predict the stock market (Bollen et al., 2011; Zhang et al., 2011). 
Despite our financial domain, our approach is more relevant to text regression. Traditional discriminative models, such as linear regression and linear SVM, have been very popular in various text regression tasks, such as predicting movie revenues from reviews (Joshi et al., 2010), understanding the geographic lexical variation (Eisenstein et al., 2010), and predicting food prices from menus (Chahuneau et al., 2012). The advantage of these models is that the estimation of the parameters is often simple, the results are easy to interpret, and the approach often yields strong performances. While these approaches have merits, they suffer from the problem of not explicitly modeling the correlations and interactions among random variables, which in some sense, corresponding to the impractical assumption of independent and identically distributed (i.i.d) of the data. For example, when bag-of-word-unigrams are present in the feature space, it is easier if one does not explicitly model the stochastic dependencies among the words, even though doing so might hurt the predictive power, while the variance from the correlations among the random variables is not explained. 3 Copula Models for Text Regression In NLP, many statistical machine learning methods that capture the dependencies among random variables, including topic models (Blei et al., 2003; Lafferty and Blei, 2005; Wang et al., 2012), always have to make assumptions with the underlying distributions of the random variables, and make use of informative priors. This might be rather restricting the expressiveness of the model in some sense (Reisinger et al., 2010). On the other hand, once such assumptions are removed, another problem arises — they might be prone to errors, and suffer from the overfitting issue. Therefore, coping with the tradeoff between expressiveness and overfitting, seems to be rather important in statistical approaches that capture stochastic dependency. Our proposed semiparametric copula regression model takes a different perspective. On one hand, copula models (Nelsen, 1999) seek to explicitly model the dependency of random variables by separating the marginals and their correlations. On the other hand, it does not make use of any as1156 sumptions on the distributions of the random variables, yet, the copula model is still expressive. This nice property essentially allows us to fuse distinctive lexical, syntactic, and semantic feature sets naturally into a single compact model. From an information-theoretic point of view (Shannon, 1948), various problems in text analytics can be formulated as estimating the probability mass/density functions of tokens in text. In NLP, many of the probabilistic text models work in the discrete space (Church and Gale, 1995; Blei et al., 2003), but our model is different: since the text features are sparse, we first perform kernel density estimates to smooth out the zeroing items, and then calculate the empirical cumulative distribution function (CDF) of the random variables. By doing this, we are essentially performing probability integral transform— an important statistical technique that moves beyond the count-based bag-of-words feature space to marginal cumulative density functions space. Last but not least, by using a parametric copula, in our case, the Gaussian copula, we reduce the computational cost from fully nonparametric methods, and explicitly model the correlations among the covariate and the dependent variable. 
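As a schematic preview of the probability integral transform just described (our own sketch, ahead of the formal treatment below): each smoothed feature column is replaced by its empirical CDF value, so that the copula operates on uniform marginals in (0, 1) rather than on raw bag-of-words counts.

import numpy as np

def empirical_cdf(train_column, values):
    # F_hat(v) = (1/m) * sum_i I{x_i <= v}, the standard empirical CDF.
    train_column = np.asarray(train_column, dtype=float)
    return np.array([(train_column <= v).mean() for v in values])

# A sparse unigram-count feature observed in five documents; the jitter below
# merely stands in for the kernel-density smoothing described in Section 3.2.
raw = np.array([0.0, 3.0, 0.0, 1.0, 7.0])
rng = np.random.default_rng(0)
smoothed = raw + rng.uniform(-0.5, 0.5, size=raw.shape)
print(empirical_cdf(smoothed, smoothed))   # uniform marginals in (0, 1]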
In this section, we first briefly look at the theoretical foundations of copulas, including the Sklar’s theorem. Then we describe the proposed semiparametric Gaussian copula text regression model. The algorithmic implementation of our approach is introduced at the end of this section. 3.1 The Theory of Copula In the statistics literature, copula is widely known as a family of distribution function. The idea behind copula theory is that the cumulative distribution function (CDF) of a random vector can be represented in the form of uniform marginal cumulative distribution functions, and a copula that connects these marginal CDFs, which describes the correlations among the input random variables. However, in order to have a valid multivariate distribution function regardless of n-dimensional covariates, not every function can be used as a copula function. The central idea behind copula, therefore, can be summarize by the Sklar’s theorem and the corollary. Theorem 1 (Sklar’s Theorem (1959)) Let F be the joint cumulative distribution function of n random variables X1, X2, ..., Xn. Let the corresponding marginal cumulative distribution functions of the random variable be F1(x1), F2(x2), ..., Fn(xn). Then, if the marginal functions are continuous, there exists a unique copula C, such that F(x1, ..., xn) = C[F1(x1), ..., Fn(xn)]. (1) Furthermore, if the distributions are continuous, the multivariate dependency structure and the marginals might be separated, and the copula can be considered independent of the marginals (Joe, 1997; Parsa and Klugman, 2011). Therefore, the copula does not have requirements on the marginal distributions, and any arbitrary marginals can be combined and their dependency structure can be modeled using the copula. The inverse of Sklar’s Theorem is also true in the following: Corollary 1 If there exists a copula C : (0, 1)n and marginal cumulative distribution functions F1(x1), F2(x2), ..., Fn(xn), then C[F1(x1), ..., Fn(xn)] defines a multivariate cumulative distribution function. 3.2 Semiparametric Gaussian Copula Models The Non-Parametric Estimation We formulate the copula regression model as follows. Assume we have n random variables of text features X1, X2, ..., Xn. The problem is that text features are sparse, so we need to perform nonparametric kernel density estimation to smooth out the distribution of each variable. Let f1, f2, ..., fn be the unknown density, we are interested in deriving the shape of these functions. Assume we have m samples, the kernel density estimator can be defined as: ˆfh(x) = 1 m m X i=1 Kh(x −xi) (2) = 1 mh m X i=1 K x −xi h ! (3) Here, K(·) is the kernel function, where in our case, we use the Box kernel2 K(z): K(z) = 1 2, |z| ≤1, (4) = 0, |z| > 1. (5) Comparing to the Gaussian kernel and other kernels, the Box kernel is simple, and computationally inexpensive. The parameter h is the bandwidth for smoothing3. 2It is also known as the original Parzen windows (Parzen, 1962). 3In our implementation, we use the default h of the Box kernel in the ksdensity function in Matlab. 1157 Now, we can derive the empirical cumulative distribution functions ˆFX1( ˆf1(X1)), ˆFX2( ˆf2(X2)), ..., ˆFXn( ˆfn(Xn)) of the smoothed covariates, as well as the dependent variable y and its CDF ˆFy( ˆf(y)). The empirical cumulative distribution functions are defined as: ˆF(ν) = 1 m m X i=1 I{xi ≤ν} (6) where I{·} is the indicator function, and ν indicates the current value that we are evaluating. 
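The kernel density step can be rendered directly in a few lines. The sketch below is ours (the paper relies on Matlab's ksdensity with its default bandwidth), so the bandwidth h here is only an illustrative choice.

import numpy as np

def box_kernel(z):
    # K(z) = 1/2 for |z| <= 1 and 0 otherwise (Eqs. 4-5).
    return np.where(np.abs(z) <= 1.0, 0.5, 0.0)

def box_kde(samples, h):
    # Returns f_hat with f_hat(x) = (1/(m*h)) * sum_i K((x - x_i) / h),
    # as in Eqs. 2-3.
    samples = np.asarray(samples, dtype=float)
    m = len(samples)
    def f_hat(x):
        return box_kernel((x - samples) / h).sum() / (m * h)
    return f_hat

f_hat = box_kde([0.0, 0.0, 1.0, 2.0, 5.0], h=1.0)
print(f_hat(0.0), f_hat(2.0), f_hat(10.0))   # density is higher near the data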
Note that the above step is also known as probability integral transform (Diebold et al., 1997), which allows us to convert any given continuous distribution to random variables having a uniform distribution. This is of crucial importance to modeling text data: instead of using the classic bag-ofwords representation that uses raw counts, we are now working with uniform marginal CDFs, which helps coping with the overfitting issue due to noise and data sparsity. The Parametric Copula Estimation Now that we have obtained the marginals, and then the joint distribution can be constructed by applying the copula function that models the stochastic dependencies among marginal CDFs: ˆF( ˆf1(X1), ..., ˆf1(Xn), ˆf(y)) (7) = C[ ˆFX1 ˆf1(X1)  , ..., ˆFXn ˆfn(Xn)  , ˆFy ˆfy(y)  ] (8) In this work, we apply the parametric Gaussian copula to model the correlations among the text features and the label. Assume xi is the smoothed version of random variable Xi, and y is the smoothed label, we have: F(x1, ..., xn, y) (9) = ΦΣ  Φ−1[Fx1(x1)], ..., , Φ−1[Fxn(xn)], Φ−1[Fy(y)]  (10) where ΦΣ is the joint cumulative distribution function of a multivariate Gaussian with zero mean and Σ variance. Φ−1 is the inverse CDF of a standard Gaussian. In this parametric part of the model, the parameter estimation boils down to the problem of learning the covariance matrix Σ of this Gaussian copula. In this work, we perform standard maximum likelihood estimation for the Σ matrix. To calibrate the Σ matrix, we make use of the power of randomness: using the initial Σ from MLE, we generate random samples from the Gaussian copula, and then concatenate previously generated joint of Gaussian inverse marginal CDFs with the newly generated random copula numbers, and re-estimate using MLE to derive the final adjusted Σ. Note that the final Σ matrix has to be symmetric and positive definite. Computational Complexity One important question regarding the proposed semiparametric Gaussian copula model is the corresponding computational complexity. This boils down to the estimation of the ˆΣ matrix (Liu et al., 2012): one only needs to calculate the correlation coefficients of n(n −1)/2 pairs of random variables. Christensen (2005) shows that sorting and balanced binary trees can be used to calculate the correlation coefficients with complexity of O(n log n). Therefore, the computational complexity of MLE for the proposed model is O(n log n). Efficient Approximate Inference In this regression task, in order to perform exact inference of the conditional probability distribution p(Fy(y)|Fx1(x1), ..., Fxn(xn)), one needs to solve the mean response ˆE(Fy(y)|Fx1(x1), ..., Fx1(x1)) from a joint distribution of high-dimensional Gaussian copula. Assume in the simple bivariate case of Gaussian copula regression, the covariance matrix Σ is: Σ =  Σ11 Σ12 Σ22  We can easily derive the conditional density that can be used to calculate the expected value of the CDF of the label: C(Fy(y)|Fx1(x1); Σ) = 1 |Σ22 −ΣT 12Σ−1 11 Σ12| 1 2 exp −1 2δT  [Σ22 −ΣT 12Σ−1 11 Σ12]−1 −I  δ ! (11) where δ = Φ−1[Fy(y)] −ΣT 12Σ−1 11 Φ−1[Fx1(x1)]. Unfortunately, the exact inference can be intractable in the multivariate case, and approximate inference, such as Markov Chain Monte Carlo sampling (Gelfand and Smith, 1990; Pitt et al., 2006) is often used for posterior inference. In this work, we propose an efficient sampling method to derive y given the text features — we sample Fy(y) s.t. 
it maximizes the joint high-dimensional Gaussian copula density:

\hat{F}_y(y) \approx \arg\max_{F_y(y) \in (0,1)} \frac{1}{\sqrt{\det \Sigma}} \exp\left(-\frac{1}{2} \Delta^T \cdot (\Sigma^{-1} - I) \cdot \Delta\right)   (12)

where

\Delta = \left(\Phi^{-1}(F_{x_1}(x_1)), ..., \Phi^{-1}(F_{x_n}(x_n)), \Phi^{-1}(F_y(y))\right)^T

Again, the reason why we perform approximate inference is that exact inference in the high-dimensional Gaussian copula density is nontrivial and might not have an analytical solution, whereas approximate inference using maximum-density sampling from the Gaussian copula significantly relaxes the complexity of inference. Finally, to derive \hat{y}, the last step is to compute the inverse CDF of \hat{F}_y(y).

3.3 Algorithmic Implementation

The algorithmic implementation of our semiparametric Gaussian copula text regression model is shown in Algorithm 1. Basically, the algorithm can be decomposed into four parts:

• Perform nonparametric Box kernel density estimates of the covariates and the dependent variable for smoothing.
• Calculate the empirical cumulative distribution functions of the smoothed random variables.
• Estimate the parameters (covariance \Sigma) of the Gaussian copula.
• Infer the predicted value of the dependent variable by sampling the Gaussian copula probability density function.

Algorithm 1 A Semiparametric Gaussian Copula Model Based Text Regression Algorithm
Given: (1) training data (X^(tr), y^(tr)); (2) testing data (X^(te), y^(te));
Learning:
for i = 1 -> n dimensions do
    X_i^(tr)' <- BoxKDE(X_i^(tr), X_i^(tr));
    U_i^(tr) <- EmpiricalCDF(X_i^(tr)');
    X_i^(te)' <- BoxKDE(X_i^(tr), X_i^(te));
    U_i^(te) <- EmpiricalCDF(X_i^(te)');
end for
y^(tr)' <- BoxKDE(y^(tr), y^(tr));
v^(tr) <- EmpiricalCDF(y^(tr)');
Z^(tr) <- GaussianInverseCDF([U^(tr) v^(tr)]);
\hat{\Sigma} <- CorrelationCoefficients(Z^(tr));
r <- MultiVariateGaussianRandNum(0, \hat{\Sigma}, n);
Z^(tr)' <- GaussianCDF(r);
\hat{\Sigma} <- CorrelationCoefficients([Z^(tr) Z^(tr)']);
Inference:
for j = 1 -> m instances do
    max_j <- 0; \hat{Y}' <- 0;
    for k = 0.01 -> 1 do
        Z^(te) <- GaussianInverseCDF([U^(te) k]);
        p_j <- MultiVariateGaussianPDF(Z^(te), \hat{\Sigma}) / \prod_n GaussianPDF(Z^(te));
        if p_j >= max_j then
            max_j <- p_j; \hat{Y}' <- k;
        end if
    end for
end for
\hat{y} <- InverseCDF(y^(tr), \hat{Y}');

4 Datasets

We use three datasets^4 of transcribed quarterly earnings calls from the U.S. stock market, focusing on the period of the Great Recession. The pre-2009 dataset consists of earnings calls from the period of 2006-2008, which includes calls from the beginning of the economic downturn, the outbreak of the subprime mortgage crisis, and the epidemic of collapses of large financial institutions. The 2009 dataset contains earnings calls from the year 2009, a period in which the credit crisis spread globally and the Dow Jones Industrial Average hit its lowest level since the beginning of the millennium. The post-2009 dataset includes earnings calls from the period of 2010 to 2013, which concerns the recovery of the global economy. Detailed statistics are shown in Table 1.

^4 http://www.cs.cmu.edu/~yww/data/earningscalls.zip

Dataset    | #Calls | #Companies | #Types | #Tokens
Pre-2009   | 3694   | 2746       | 371.5K | 28.7M
2009       | 3474   | 2178       | 346.2K | 26.4M
Post-2009  | 3726   | 2107       | 377.4K | 28.6M
Table 1: Statistics of the three datasets. Types: unique words. Tokens: word tokens.

Note that unlike standard news corpora in NLP or SEC-mandated financial reports, transcripts of earnings calls are a very special genre of text. For example, the length of WSJ documents is typically one to three hundred (Harman, 1995), whereas the average document length of our three earnings calls datasets is 7677.
Depending on the amount of interaction in the question answering session, the complexity of the calls varies. This mixed form of formal statements and informal speech poses difficulties for machine learning algorithms.

5 Measuring Financial Risks

Volatility is an important measure of financial risk, and in this work we focus on predicting the future volatility following the earnings teleconference call. For each earnings call, we have a week of stock prices of the company after the day on which the earnings call is made. The return of day t is:

r_t = \frac{x_t}{x_{t-1}} - 1   (13)

where x_t represents the share price of day t, and the measured stock volatility from day t to t + \tau is:

y_{(t,t+\tau)} = \sqrt{\frac{\sum_{i=0}^{\tau} (r_{t+i} - \bar{r})^2}{\tau}}   (14)

Using the stock prices, we can apply the equations above to calculate the measured stock volatility after the earnings call, which is the standard measure of risk in finance and the dependent variable y of our predictive task.

6 Experiments

6.1 Experimental Setup

In all experiments throughout this section, we use 80-20 train/test splits on all three datasets.

Feature sets: We extract lexical, named entity, syntactic, and frame-semantics features, most of which have been shown to perform well in previous work (Xie et al., 2013). We use unigrams and bigrams to represent lexical features, and the Stanford part-of-speech tagger (Toutanova et al., 2003) to extract the lexicalized named entity and part-of-speech features. A probabilistic frame-semantics parser, SEMAFOR (Das et al., 2010), is used to provide the FrameNet-style frame-level semantic annotations. For each of the five sets, we collect the top-100 most frequent features, and end up with a total of 500 features.

Baselines: The baselines are standard squared-loss linear regression, linear kernel SVM, and non-linear (Gaussian) kernel SVM. They are all standard algorithms for regression problems and have shown outstanding performance in much recent text regression work (Kogan et al., 2009; Chahuneau et al., 2012; Xie et al., 2013; Wang et al., 2013; Tsai and Wang, 2013). We use the Statistical Toolbox's linear regression implementation in Matlab, and LibSVM (Chang and Lin, 2011) for training and testing the SVM models. The hyperparameter C in the linear SVM and the γ and C hyperparameters in the Gaussian SVM are tuned on the training set using 10-fold cross-validation. Note that since the kernel density estimation in the proposed copula model is nonparametric and we only need to learn Σ in the Gaussian copula, there are no hyperparameters that need to be tuned.

Evaluation Metrics: Spearman's correlation (Hogg and Craig, 1994) and Kendall's tau (Kendall, 1938) have been widely used in many regression problems in NLP (Albrecht and Hwa, 2007; Yogatama et al., 2011; Wang et al., 2013; Tsai and Wang, 2013), and here we use them to measure the quality of the predicted values ŷ by comparing them to the vector of ground truth y. In contrast to Pearson's correlation, Spearman's correlation makes no assumptions about the relationship between the two measured variables. Kendall's tau is a nonparametric statistical metric that has been shown to be inexpensive, robust, and representation independent (Lapata, 2006). We also use a paired two-tailed t-test to measure the statistical significance between the best and the second-best approaches.

6.2 Comparing to Various Baselines

In the first experiment, we compare the proposed semiparametric Gaussian copula regression model to three baselines on three datasets with all features.
The detailed results are shown in Table 2. On the pre-2009 dataset, we see that linear regression and the linear SVM perform reasonably well, but the Gaussian kernel SVM performs less well, probably due to overfitting. The copula model outperformed all three baselines by a wide margin on this dataset with both metrics. Similar performance is also obtained on the 2009 dataset, where the linear SVM baseline falls behind. On the post-2009 dataset, none of the results from the linear and non-linear SVM models can match the linear regression model, but our proposed copula model still improves over all baselines by a large margin. Compared to the second-best approaches, all improvements obtained by the copula model are statistically significant.

Method            | Pre-2009          | 2009              | Post-2009
                  | Spearman  Kendall | Spearman  Kendall | Spearman  Kendall
linear regression | 0.377     0.259   | 0.367     0.252   | 0.314     0.216
linear SVM        | 0.364     0.249   | 0.242     0.167   | 0.132     0.091
Gaussian SVM      | 0.305     0.207   | 0.280     0.192   | 0.152     0.104
Gaussian copula   | 0.425*    0.315*  | 0.422*    0.310*  | 0.375*    0.282*
Table 2: Comparing the learning algorithms on three datasets with all features. The best result is highlighted in bold. * indicates p < .001 compared to the second best result.

6.3 Varying the Amount of Training Data

To understand the learning curve of our proposed copula regression model, we use 25%, 50%, and 75% subsets of the training data and evaluate all four models. Figure 1 shows the evaluation results.

Figure 1: Varying the amount of training data. Left column: pre-2009 dataset. Middle column: 2009 dataset. Right column: post-2009 dataset. Top row: Spearman's correlation. Bottom row: Kendall's tau.

From the experiments on the pre-2009 dataset, we see that when the amount of training data is small (25%), both SVM models obtain very impressive results. This is not surprising at all, because as max-margin models, soft-margin SVMs only need a handful of examples with nonvanishing coefficients (support vectors) to find a reasonable margin. When increasing the amount of training data to 50%, we see the proposed copula model catch up quickly, and it clearly leads all baseline methods at 75% of the training data. On the 2009 dataset, we observe very similar patterns. Interestingly, the proposed copula regression model dominates all methods on both metrics throughout all proportions of the "post-2009" earnings calls dataset, where the main theme is economic recovery rather than the financial crisis. In contrast to the previous two datasets, both linear and non-linear SVMs fail to reach reasonable performance on this dataset.

6.4 Varying the Amount of Features

Finally, we investigate the robustness of the proposed semiparametric Gaussian copula regression model by varying the amount of features in the covariate space. To do this, we sample an equal amount of features from each feature set and concatenate them into a feature vector. The results when increasing the total number of features from 100 to 400 are shown in Figure 2. On the pre-2009 dataset, we see that the gaps between the best-performing copula model and the second-best linear regression model are consistent across all feature sizes. On the 2009 dataset, we see that the performance of the Gaussian copula is aligned with the linear regression model in terms of Spearman's correlation, while the former seems to perform better in terms of Kendall's tau. Neither the linear nor the non-linear SVM model has any advantage over the proposed approach.
On the post2009 dataset that concerns economic growth and recovery, the boundaries among all methods are very clear. The Spearman’s correlation for both SVM baselines is less than 0.15 throughout all settings, but copula model is able to achieve 0.4 when using 400 features. The improvements of copula 1161 Figure 2: Varying the amount of features. Left column: pre-2009 dataset. Middle column: 2009 dataset. Right column: post-2009 dataset. Top row: Spearman’s correlation. Bottom row: Kendall’s tau. Pre-2009 2009 Post-2009 2008/CD 2008 first quarter 2008 million/CD revenue/NN third quarter 2008/CD revenue third million quarter of third/JJ million in compared to the third the fourth million in million/CD fourth quarter Peter/PERSON capital fourth call million fourth/JJ first/JJ FE Trajector entity $/$ million/CD Table 3: Top-10 features that have positive correlations with stock volatility in three datasets. model over squared loss linear regression model are increasing, when working with larger feature spaces. 6.5 Qualitative Analysis Like linear classifiers, by “opening the hood” to the Gaussian copula regression model, one can examine features that exhibit high correlations with the dependent variable. Table 3 shows the top features that are positively correlated with the future stock volatility in the three datasets. On the top features from the “pre-2009” dataset, which primarily (82%) includes calls from 2008, we can clearly observe that the word “2008” has strong correlation with the financial risks. Interestingly, the phrase “third quarter” and its variations, not only play an important role in the model, but also highly correlated to the timeline of the financial crisis: the Q3 of 2008 is a critical period in the recession, where Lehman Brothers falls on the Sept. 15 of 2008, filing $613 billion of debt — the biggest bankruptcy in U.S. history (Mamudi, 2008). This huge panic soon broke out in various financial institutions in the Wall Street. On the top features from “2009” dataset, again, we see the word “2008” is still prominent in predicting financial risks, indicating the hardship and extended impacts from the center of the economic crisis. After examining the transcripts, we found sentences like: “...our specialty lighting business that we discontinued in the fourth quarter of 2008...”, “...the exception of fourth quarter revenue which was $100,000 below our guidance target...”, and “...to address changing economic conditions and their impact on our operations, in the fourth quarter we took the painful but prudent step of decreasing our headcount by about 5%...”, showing the crucial role that Q4 of 2008 plays in 2009 earnings calls. Interestingly, after the 2008-2009 crisis, in the recovery period, we have observed new words like “revenue”, indicating the “back-tonormal” trend of financial environment, and new features that predict financial volatility. 7 Discussions In the experimental section, we notice that the proposed semiparametric Gaussian copula model has obtained promising results in various setups on three datasets in this text regression task. The 1162 main questions we ask are: how is the proposed model different from standard text regression/classification models? What are the advantages of copula-based models, and what makes it perform so well? One advantage we see from the copula model is that it does not require any assumptions on the marginal distributions. 
For example, in latent Dirichlet allocation (Blei et al., 2003), the topic proportion of a document is always drawn from a Dirichlet(α) distribution. This is rather restricted, because the possible shapes from a K −1 simplex of Dirichlet is always limited in some sense. In our copula model, instead of using some priors, we just calculate the empirical cumulative distribution function of the random variables, and model the correlation among them. This is extremely practical, because in many natural language processing tasks, we often have to deal with features that are extracted from many different domains and signals. By applying the Probability Integral Transform to raw features in the copula model, we essentially avoid comparing apples and oranges in the feature space, which is a common problem in bag-of-features models in NLP. The second hypothesis is about the semiparametirc parameterization, which contains the nonparametric kernel density estimation and the parametric Gaussian copula regression components. The benefit of a semiparametric model is that here we are not interested in performing completely nonparametric estimations, where the infinite dimensional parameters might bring intractability. In contrast, by considering the semiparametric case, we not only obtain some expressiveness from the nonparametric models, but also reduce the complexity of the task: we are only interested in the finite-dimensional components Σ in the Gaussian copula with O(n log n) complexity, which is not as computationally difficult as the completely nonparametric cases. Also, by modeling the marginals and their correlations seperately, our approach is cleaner, easy-to-understand, and allows us to have more flexibility to model the uncertainty of data. Our pilot experiment also aligns with our hypothesis: when not performing the kernel density estimation part for smoothing out the marginal distributions, the performances dropped significantly when sparser features are included. The third advantage we observe is the power of modeling the covariance of the random variables. Traditionally, in statistics, independent and identically distributed (i.i.d) assumptions among the instances and the random variables are often used in various models, such that the correlations among the instances or the variables are often ignored. However, this might not be practical at all: in image processing, the “cloud” pixel of a pixel showing the blue sky of a picture are more likelihood to co-occur in the same picture; in natural language processing, the word “mythical” is more likely to co-occur with the word “unicorn”, rather than the word “popcorn”. Therefore, by modeling the correlations among marginal CDFs, the copula model has gained the insights on the dependency structures of the random variables, and thus, the performance of the regression task is boosted. In the future, we plan to apply the proposed approach to large datasets where millions of features and millions of instances are involved. Currently we have not experienced the difficulty when estimating the Gaussian copula model, but parallel methods might be needed to speedup learning when significantly more marginal CDFs are involved. The second issue is about overfitting. We see that when features are rather noisy, we might need to investigate regularized copula models to avoid this. Finally, we plan to extend the proposed approach to text classification and structured prediction problems in NLP. 
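To tie the above discussion back to Algorithm 1, the sketch below strings together the three ingredients discussed in this section: probability-integral-transformed marginals, the correlation matrix Σ of the Gaussian copula, and maximum-density inference (Eq. 12). It is a simplified illustration under our own naming: the clipping of the CDF values and the grid resolution are practical choices of ours, and the random-sample recalibration of Σ from Section 3.2 is omitted.

    import numpy as np
    from scipy.stats import norm, multivariate_normal

    def to_uniform(ref, vals, eps=1e-3):
        # Empirical CDF of `ref` evaluated at `vals`, clipped away from 0 and 1 so
        # that the Gaussian inverse CDF stays finite (the clipping is our safeguard).
        u = (np.asarray(ref)[None, :] <= np.asarray(vals)[:, None]).mean(axis=1)
        return np.clip(u, eps, 1.0 - eps)

    def fit_gaussian_copula(U_train, v_train):
        # Normal scores of the uniform marginals (Eqs. 9-10), then their correlations.
        Z = norm.ppf(np.column_stack([U_train, v_train]))
        return np.corrcoef(Z, rowvar=False)

    def predict_label(u_test_row, Sigma, y_train, grid=np.linspace(0.01, 0.99, 99)):
        # Maximum-density inference (Eq. 12): grid-search the label CDF value k that
        # maximizes the Gaussian copula density, then invert the label's empirical CDF.
        best_k, best_p = grid[0], -np.inf
        for k in grid:
            z = norm.ppf(np.append(u_test_row, k))
            density = multivariate_normal.pdf(z, mean=np.zeros(len(z)), cov=Sigma)
            p = density / np.prod(norm.pdf(z))  # joint Gaussian pdf / product of marginals
            if p > best_p:
                best_p, best_k = p, k
        return np.quantile(y_train, best_k)  # inverse empirical CDF of training labels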
8 Conclusion In this work, we have demonstrated that the more complex quarterly earnings calls can also be used to predict the measured volatility of the stocks in the limited future. We propose a novel semiparametric Gausian copula regression approach that models the dependency structure of the language in the earnings calls. Unlike traditional bag-offeatures models that work discrete features from various signals, we perform kernel density estimation to smooth out the distribution, and use probability integral transform to work with CDFs that are uniform. The copula model deals with marginal CDFs and the correlation among them separately, in a cleaner manner that is also flexible to parameterize. Focusing on the three financial crisis related datasets, the proposed model significantly outperform the standard linear regression method in statistics and strong discriminative support vector regression baselines. By varying the size of the training data and the dimensionality of the covariates, we have demonstrated that our proposed model is relatively robust across different parameter settings. Acknowledgement We thank Alex Smola, Barnab´as P´oczos, Sam Thomson, Shoou-I Yu, Zi Yang, and anonymous reviewers for their useful comments. 1163 References Joshua Albrecht and Rebecca Hwa. 2007. Regression for sentence-level mt evaluation with pseudo references. In Proceedings of Annual Meeting of the Association for Computational Linguistics. David Blei, Andrew Ng, and Michael Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research. Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science. Belinda Camiciottoli. 2010. Earnings calls: Exploring an emerging financial reporting genre. Discourse & Communication. Victor Chahuneau, Kevin Gimpel, Bryan R Routledge, Lily Scherlis, and Noah A Smith. 2012. Word salad: Relating food prices and descriptions. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Chih-Chung Chang and Chih-Jen Lin. 2011. Libsvm: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology. Xiaohong Chen and Yanqin Fan. 2006. Estimation of copula-based semiparametric time series models. Journal of Econometrics. David Christensen. 2005. Fast algorithms for the calculation of kendalls τ. Computational Statistics. Kenneth Church and William Gale. 1995. Poisson mixtures. Natural Language Engineering. Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A Smith. 2010. Probabilistic frame-semantic parsing. In Human language technologies: The 2010 annual conference of the North American chapter of the association for computational linguistics. Francis X Diebold, Todd A Gunther, and Anthony S Tay. 1997. Evaluating density forecasts. Carsten Eickhoff, Arjen P. de Vries, and Kevyn Collins-Thompson. 2013. Copulas for information retrieval. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval. Jacob Eisenstein, Brendan O’Connor, Noah A Smith, and Eric P Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Pui Cheong Fung, Xu Yu, and Wai Lam. 2003. Stock prediction: Integrating text mining approach using real-time news. In Proceedings of IEEE International Conference on Computational Intelligence for Financial Engineering. 
Alan Gelfand and Adrian Smith. 1990. Samplingbased approaches to calculating marginal densities. Journal of the American statistical association. Christian Genest and Anne-Catherine Favre. 2007. Everything you always wanted to know about copula modeling but were afraid to ask. Journal of Hydrologic Engineering. Zoubin Ghahramani, Barnab´as P´oczos, and Jeff Schneider. 2012. Copula-based kernel dependency measures. In Proceedings of the 29th International Conference on Machine Learning. Fang Han, Tuo Zhao, and Han Liu. 2012. Coda: High dimensional copula discriminant analysis. Journal of Machine Learning Research. Donna Harman. 1995. Overview of the second text retrieval conference (trec-2). Information Processing & Management. Robert V Hogg and Allen Craig. 1994. Introduction to mathematical statistics. Harry Joe. 1997. Multivariate models and dependence concepts. Mahesh Joshi, Dipanjan Das, Kevin Gimpel, and Noah A Smith. 2010. Movie reviews and revenues: An experiment in text regression. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Maurice Kendall. 1938. A new measure of rank correlation. Biometrika. Shimon Kogan, Dimitry Levin, Bryan Routledge, Jacob Sagi, and Noah Smith. 2009. Predicting risk from financial reports with regression. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. John Lafferty and David Blei. 2005. Correlated topic models. In Advances in neural information processing systems. Mirella Lapata. 2006. Automatic evaluation of information ordering: Kendall’s tau. Computational Linguistics. Han Liu, Fang Han, Ming Yuan, John Lafferty, and Larry Wasserman. 2012. High-dimensional semiparametric gaussian copula graphical models. The Annals of Statistics. David Lopez-paz, Jose M Hern´andez-lobato, and Ghahramani Zoubin. 2013. Gaussian process vine copulas for multivariate dependence. In Proceedings of the 30th International Conference on Machine Learning. Sam Mamudi. 2008. Lehman folds with record $613 billion debt. MarketWatch.com. 1164 Guido Masarotto and Cristiano Varin. 2012. Gaussian copula marginal regression. Electronic Journal of Statistics. Roger B Nelsen. 1999. An introduction to copulas. Springer Verlag. Rahul A Parsa and Stuart A Klugman. 2011. Copula regression. Variance Advancing and Science of Risk. Emanuel Parzen. 1962. On estimation of a probability density function and mode. The annals of mathematical statistics. Michael Pitt, David Chan, and Robert Kohn. 2006. Efficient bayesian inference for gaussian copula regression models. Biometrika. McKay Price, James Doran, David Peterson, and Barbara Bliss. 2012. Earnings conference calls and stock returns: The incremental informativeness of textual tone. Journal of Banking & Finance. Joseph Reisinger, Austin Waters, Bryan Silverthorn, and Raymond J Mooney. 2010. Spherical topic models. In Proceedings of the 27th International Conference on Machine Learning. Berthold Schweizer and Abe Sklar. 1983. Probabilistic metric spaces. Claude Shannon. 1948. A mathematical theory of communication. In The Bell System Technical Journal. Abe Sklar. 1959. Fonctions de r´epartition `a n dimensions et leurs marges. Universit´e Paris 8. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. 
In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology. Ming-Feng Tsai and Chuan-Ju Wang. 2013. Risk ranking from financial reports. In Advances in Information Retrieval. William Yang Wang, Elijah Mayfield, Suresh Naidu, and Jeremiah Dittmar. 2012. Historical analysis of legal opinions with a sparse mixed-effects latent variable model. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. Chuan-Ju Wang, Ming-Feng Tsai, Tse Liu, and ChinTing Chang. 2013. Financial sentiment analysis for risk prediction. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. Rongjing Xiang and Jennifer Neville. 2013. Collective inference for network data with copula latent markov networks. In Proceedings of the sixth ACM international conference on Web search and data mining. Boyi Xie, Rebecca J. Passonneau, Leon Wu, and Germ´an G. Creamer. 2013. Semantic frames to predict stock price movement. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Dani Yogatama, Michael Heilman, Brendan O’Connor, Chris Dyer, Bryan R Routledge, and Noah A Smith. 2011. Predicting a scientific community’s response to an article. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Xue Zhang, Hauke Fuehres, and Peter A Gloor. 2011. Predicting stock market indicators through twitter “i hope it is not as bad as i fear”. Procedia-Social and Behavioral Sciences. 1165
2014
109
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 111–121, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Bilingually-constrained Phrase Embeddings for Machine Translation Jiajun Zhang1, Shujie Liu2, Mu Li2, Ming Zhou2 and Chengqing Zong1 1National Laboratory of Pattern Recognition, CASIA, Beijing, P.R. China {jjzhang,cqzong}@nlpr.ia.ac.cn 2Microsoft Research Asia, Beijing, P.R. China {shujliu,muli,mingzhou}@microsoft.com Abstract We propose Bilingually-constrained Recursive Auto-encoders (BRAE) to learn semantic phrase embeddings (compact vector representations for phrases), which can distinguish the phrases with different semantic meanings. The BRAE is trained in a way that minimizes the semantic distance of translation equivalents and maximizes the semantic distance of nontranslation pairs simultaneously. After training, the model learns how to embed each phrase semantically in two languages and also learns how to transform semantic embedding space in one language to the other. We evaluate our proposed method on two end-to-end SMT tasks (phrase table pruning and decoding with phrasal semantic similarities) which need to measure semantic similarity between a source phrase and its translation candidates. Extensive experiments show that the BRAE is remarkably effective in these two tasks. 1 Introduction Due to the powerful capacity of feature learning and representation, Deep (multi-layer) Neural Networks (DNN) have achieved a great success in speech and image processing (Kavukcuoglu et al., 2010; Krizhevsky et al., 2012; Dahl et al., 2012). Recently, statistical machine translation (SMT) community has seen a strong interest in adapting and applying DNN to many tasks, such as word alignment (Yang et al., 2013), translation confidence estimation (Mikolov et al., 2010; Liu et al., 2013; Zou et al., 2013), phrase reordering prediction (Li et al., 2013), translation modelling (Auli et al., 2013; Kalchbrenner and Blunsom, 2013) and language modelling (Duh et al., 2013; Vaswani et al., 2013). Most of these works attempt to improve some components in SMT based on word embedding, which converts a word into a dense, low dimensional, real-valued vector representation (Bengio et al., 2003; Bengio et al., 2006; Collobert and Weston, 2008; Mikolov et al., 2013). However, in the conventional (phrase-based) SMT, phrases are the basic translation units. The models using word embeddings as the direct inputs to DNN cannot make full use of the whole syntactic and semantic information of the phrasal translation rules. Therefore, in order to successfully apply DNN to model the whole translation process, such as modelling the decoding process, learning compact vector representations for the basic phrasal translation units is the essential and fundamental work. In this paper, we explore the phrase embedding, which represents a phrase (sequence of words) with a real-valued vector. In some previous works, phrase embedding has been discussed from different views. Socher et al. (2011) make the phrase embeddings capture the sentiment information. Socher et al. (2013a) enable the phrase embeddings to mainly capture the syntactic knowledge. Li et al. (2013) attempt to encode the reordering pattern in the phrase embeddings. Kalchbrenner and Blunsom (2013) utilize a simple convolution model to generate phrase embeddings from word embeddings. Mikolov et al. (2013) consider a phrase as an indivisible n-gram. 
Obviously, these methods of learning phrase embeddings either focus on some aspects of the phrase (e.g. reordering pattern), or impose strong assumptions (e.g. bagof-words or indivisible n-gram). Therefore, these phrase embeddings are not suitable to fully represent the phrasal translation units in SMT due to the lack of semantic meanings of the phrase. Instead, we focus on learning phrase embeddings from the view of semantic meaning, so that our phrase embedding can fully represent the phrase and best fit the phrase-based SMT. Assuming the phrase is a meaningful composition 111 of its internal words, we propose Bilinguallyconstrained Recursive Auto-encoders (BRAE) to learn semantic phrase embeddings. The core idea behind is that a phrase and its correct translation should share the same semantic meaning. Thus, they can supervise each other to learn their semantic phrase embeddings. Similarly, non-translation pairs should have different semantic meanings, and this information can also be used to guide learning semantic phrase embeddings. In our method, the standard recursive autoencoder (RAE) pre-trains the phrase embedding with an unsupervised algorithm by minimizing the reconstruction error (Socher et al., 2010), while the bilingually-constrained model learns to finetune the phrase embedding by minimizing the semantic distance between translation equivalents and maximizing the semantic distance between non-translation pairs. We use an example to explain our model. As illustrated in Fig. 1, the Chinese phrase on the left and the English phrase on the right are translations with each other. If we learn the embedding of the Chinese phrase correctly, we can regard it as the gold representation for the English phrase and use it to guide the process of learning English phrase embedding. In the other direction, the Chinese phrase embedding can be learned in the same way. This procedure can be performed with an co-training style algorithm so as to minimize the semantic distance between the translation equivalents 1. In this way, the result Chinese and English phrase embeddings will capture the semantics as much as possible. Furthermore, a transformation function between the Chinese and English semantic spaces can be learned as well. With the learned model, we can accurately measure the semantic similarity between a source phrase and a translation candidate. Accordingly, we evaluate the BRAE model on two end-toend SMT tasks (phrase table pruning and decoding with phrasal semantic similarities) which need to check whether a translation candidate and the source phrase are in the same meaning. In phrase table pruning, we discard the phrasal translation rules with low semantic similarity. In decoding with phrasal semantic similarities, we apply the semantic similarities of the phrase pairs as new features during decoding to guide translation can1For simplicity, we do not show non-translation pairs here. source phrase embedding ps 法国 和 俄罗斯 France and Russia target phrase embedding pt Figure 1: A motivation example for the BRAE model. didate selection. The experiments show that up to 72% of the phrase table can be discarded without significant decrease on the translation quality, and in decoding with phrasal semantic similarities up to 1.7 BLEU score improvement over the state-ofthe-art baseline can be achieved. In addition, our semantic phrase embeddings have many other potential applications. For instance, the semantic phrase embeddings can be directly fed to DNN to model the decoding process. 
Besides SMT, the semantic phrase embeddings can be used in other cross-lingual tasks (e.g. cross-lingual question answering) and monolingual applications such as textual entailment, question answering and paraphrase detection. 2 Related Work Recently, phrase embedding has drawn more and more attention. There are three main perspectives handling this task in monolingual languages. One method considers the phrases as bag-ofwords and employs a convolution model to transform the word embeddings to phrase embeddings (Collobert et al., 2011; Kalchbrenner and Blunsom, 2013). Gao et al. (2013) also use bag-ofwords but learn BLEU sensitive phrase embeddings. This kind of approaches does not take the word order into account and loses much information. Instead, our bilingually-constrained recursive auto-encoders not only learn the composition mechanism of generating phrases from words, but also fine tune the word embeddings during the model training stage, so that we can induce the full information of the phrases and internal words. Another method (Mikolov et al., 2013) deals with the phrases having a meaning that is not a simple composition of the meanings of its individual words, such as New York Times. They first find the phrases of this kind. Then, they regard these phrases as indivisible units, and learn their embeddings with the context information. How112 ever, this kind of phrase embedding is hard to capture full semantics since the context of a phrase is limited. Furthermore, this method can only account for a very small part of phrases, since most of the phrases are compositional. In contrast, our method attempts to learn the semantic vector representation for any phrase. The third method views any phrase as the meaningful composition of its internal words. The recursive auto-encoder is typically adopted to learn the way of composition (Socher et al., 2010; Socher et al., 2011; Socher et al., 2013a; Socher et al., 2013b; Li et al., 2013). They pre-train the RAE with an unsupervised algorithm. And then, they fine-tune the RAE according to the label of the phrase, such as the syntactic category in parsing (Socher et al., 2013a), the polarity in sentiment analysis (Socher et al., 2011; Socher et al., 2013b), and the reordering pattern in SMT (Li et al., 2013). This kind of semi-supervised phrase embedding is in fact performing phrase clustering with respect to the phrase label. For example, in the RAEbased phrase reordering model for SMT (Li et al., 2013), the phrases with the similar reordering tendency (e.g. monotone or swap) are close to each other in the embedding space, such as the prepositional phrases. Obviously, this kind methods of semi-supervised phrase embedding do not fully address the semantic meaning of the phrases. Although we also follow the composition-based phrase embedding, we are the first to focus on the semantic meanings of the phrases and propose a bilingually-constrained model to induce the semantic information and learn transformation of the semantic space in one language to the other. 3 Bilingually-constrained Recursive Auto-encoders This section introduces the Bilinguallyconstrained Recursive Auto-encoders (BRAE), that is inspired by two observations. First, the recursive auto-encoder provides a reasonable composition mechanism to embed each phrase. And the semi-supervised phrase embedding (Socher et al., 2011; Socher et al., 2013a; Li et al., 2013) further indicates that phrase embedding can be tuned with respect to the label. 
Second, even though we have no correct semantic phrase representation as the gold label, the phrases sharing the same meaning provide an indirect but feasible way. x1 x2 x3 x4 y1=f(W(1)[x1; x2]+b) y2=f(W(1)[y1; x3]+b) y3=f(W(1)[y2; x4]+b) Figure 2: A recursive auto-encoder for a fourword phrase. The empty nodes are the reconstructions of the input. We will first briefly present the unsupervised phrase embedding, and then describe the semisupervised framework. After that, we introduce the BRAE on the network structure, objective function and parameter inference. 3.1 Unsupervised Phrase Embedding 3.1.1 Word Vector Representations In phrase embedding using composition, the word vector representation is the basis and serves as the input to the neural network. After learning word embeddings with DNN (Bengio et al., 2003; Collobert and Weston, 2008; Mikolov et al., 2013), each word in the vocabulary V corresponds to a vector x ∈Rn, and all the vectors are stacked into an embedding matrix L ∈Rn×|V |. Given a phrase which is an ordered list of m words, each word has an index i into the columns of the embedding matrix L. The index i is used to retrieve the word’s vector representation using a simple multiplication with a binary vector e which is zero in all positions except for the ith index: xi = Lei ∈Rn (1) Note that n is usually set empirically, such as n = 50, 100, 200. Throughout this paper, n = 3 is used for better illustration as shown in Fig. 1. 3.1.2 RAE-based Phrase Embedding Assuming we are given a phrase w1w2 · · · wm, it is first projected into a list of vectors (x1, x2, · · · , xm) using Eq. 1. The RAE learns the vector representation of the phrase by recursively combining two children vectors in a bottomup manner (Socher et al., 2011). Fig. 2 illustrates an instance of a RAE applied to a binary tree, in 113 which a standard auto-encoder (in box) is re-used at each node. The standard auto-encoder aims at learning an abstract representation of its input. For two children c1 = x1 and c2 = x2, the autoencoder computes the parent vector y1 as follows: p = f(W (1)[c1; c2] + b(1)) (2) Where we multiply the parameter matrix W (1) ∈ Rn×2n by the concatenation of two children [c1; c2] ∈R2n×1. After adding a bias term b(1), we apply an element-wise activation function such as f = tanh(·), which is used in our experiments. In order to apply this auto-encoder to each pair of children, the representation of the parent p should have the same dimensionality as the ci’s. To assess how well the parent’s vector represents its children, the standard auto-encoder reconstructs the children in a reconstruction layer: [c′ 1; c′ 2] = f(2)(W (2)p + b(2)) (3) Where c′ 1 and c′ 2 are reconstructed children, W (2) and b(2) are parameter matrix and bias term for reconstruction respectively, and f(2) = tanh(·). To obtain the optimal abstract representation of the inputs, the standard auto-encoder tries to minimize the reconstruction errors between the inputs and the reconstructed ones during training: Erec([c1; c2]) = 1 2||[c1; c2] −[c′ 1; c′ 2]||2 (4) Given y1 = p, we can use Eq. 2 again to compute y2 by setting the children to be [c1; c2] = [y1; x3]. The same auto-encoder is re-used until the vector of the whole phrase is generated. 
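The composition just described (Eqs. 2-4) is easy to state in a few lines of numpy. The sketch below shows only the forward pass with the greedy merging of Socher et al. (2011); the parameter matrices W(1) in R^{n x 2n} and W(2) in R^{2n x n} are assumed to be given (their training is described later), and the function names are ours.

    import numpy as np

    def rae_unit(c1, c2, W1, b1, W2, b2):
        # One auto-encoder unit: compose two children (Eq. 2), reconstruct them
        # (Eq. 3), and return the parent vector plus the reconstruction error (Eq. 4).
        c = np.concatenate([c1, c2])
        p = np.tanh(W1 @ c + b1)
        c_rec = np.tanh(W2 @ p + b2)
        return p, 0.5 * np.sum((c - c_rec) ** 2)

    def embed_phrase(word_vecs, W1, b1, W2, b2):
        # Greedy bottom-up embedding: repeatedly merge the adjacent pair whose
        # auto-encoder unit yields the lowest reconstruction error.
        nodes, total_err = list(word_vecs), 0.0
        while len(nodes) > 1:
            cands = [rae_unit(nodes[i], nodes[i + 1], W1, b1, W2, b2)
                     for i in range(len(nodes) - 1)]
            i = int(np.argmin([err for _, err in cands]))
            parent, err = cands[i]
            nodes[i:i + 2] = [parent]
            total_err += err
        return nodes[0], total_err  # phrase vector and summed reconstruction error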
For unsupervised phrase embedding, the only objective is to minimize the sum of reconstruction errors at each node in the optimal binary tree: RAEθ(x) = argmin y∈A(x) X s∈y Erec([c1; c2]s) (5) Where x is the list of vectors of a phrase, and A(x) denotes all the possible binary trees that can be built from inputs x. A greedy algorithm (Socher et al., 2011) is used to generate the optimal binary tree y. The parameters θ = (W, b) are optimized over all the phrases in the training data. 3.2 Semi-supervised Phrase Embedding The above RAE is completely unsupervised and can only induce general representations of the Reconstruction Error Prediction Error W(1) W(2) W(label) Figure 3: An illustration of a semi-supervised RAE unit. Red nodes show the label distribution. multi-word phrases. Several researchers extend the original RAEs to a semi-supervised setting so that the induced phrase embedding can predict a target label, such as polarity in sentiment analysis (Socher et al., 2011), syntactic category in parsing (Socher et al., 2013a) and phrase reordering pattern in SMT (Li et al., 2013). In the semi-supervised RAE for phrase embedding, the objective function over a (phrase, label) pair (x, t) includes the reconstruction error and the prediction error, as illustrated in Fig. 3. E(x, t; θ) = αErec(x, t; θ)+(1−α)Epred(x, t; θ) (6) Where the hyper-parameter α is used to balance the reconstruction and prediction error. For label prediction, the cross-entropy error is usually used to calculate Epred. By optimizing the above objective, the phrases in the vector embedding space will be grouped according to the labels. 3.3 The BRAE Model We know from the semi-supervised phrase embedding that the learned vector representation can be well adapted to the given label. Therefore, we can imagine that learning semantic phrase embedding is reasonable if we are given gold vector representations of the phrases. However, no gold semantic phrase embedding exists. Fortunately, we know the fact that the two phrases should share the same semantic representation if they express the same meaning. We can make inference from this fact that if a model can learn the same embedding for any phrase pair sharing the same meaning, the learned embedding must encode the semantics of the phrases and the corresponding model is our desire. As translation equivalents share the same semantic meaning, we employ high-quality phrase translation pairs as training corpus in this work. Accordingly, we propose the Bilinguallyconstrained Recursive Auto-encoders (BRAE), 114 Source Reconstruction Error Source Prediction Error Ws (1) Ws (2) Ws (label) Target Reconstruction Error Wt (1) Wt (2) Wt (label) Target Prediction Error Source Language Phrase Target Language Phrase Figure 4: An illustration of the bilingualconstrained recursive auto-encoders. The two phrases are translations with each other. whose basic goal is to minimize the semantic distance between the phrases and their translations. 3.3.1 The Objective Function Unlike previous methods, the BRAE model jointly learns two RAEs (Fig. 4 shows the network structure): one for source language and the other for target language. For a phrase pair (s, t), two kinds of errors are involved: 1. reconstruction error Erec(s, t; θ): how well the learned vector representations ps and pt represent the phrase s and t respectively? Erec(s, t; θ) = Erec(s; θ) + Erec(t; θ) (7) 2. semantic error Esem(s, t; θ): what is the semantic distance between the learned vector representations ps and pt? 
Since word embeddings for two languages are learned separately and locate in different vector space, we do not enforce the phrase embeddings in two languages to be in the same semantic vector space. We suppose there is a transformation between the two semantic embedding spaces. Thus, the semantic distance is bidirectional: the distance between pt and the transformation of ps, and that between ps and the transformation of pt. As a result, the overall semantic error becomes: Esem(s, t; θ) = Esem(s|t, θ) + Esem(t|s, θ) (8) Where Esem(s|t, θ) = Esem(pt, f(W l sps + bl s)) means the transformation of ps is performed as follows: we first multiply a parameter matrix W l s by ps, and after adding a bias term bl s we apply an element-wise activation function f = tanh(·). Finally, we calculate their Euclidean distance: Esem(s|t, θ) = 1 2||pt −f(W l sps + bl s)|| 2 (9) Esem(t|s, θ) can be calculated in exactly the same way. For the phrase pair (s, t), the joint error is: E(s, t; θ) = αErec(s, t; θ) + (1 −α)Esem(s, t; θ) (10) The hyper-parameter α weights the reconstruction and semantic error. The final BRAE objective over the phrase pairs training set (S, T) becomes: JBRAE = 1 N X (s,t)∈(S,T) E(s, t; θ) + λ 2 ||θ||2 (11) 3.3.2 Max-Semantic-Margin Error Ideally, we want the learned BRAE model can make sure that the semantic error for the positive example (a source phrase s and its correct translation t) is much smaller than that for the negative example (the source phrase s and a bad translation t′). However, the current model cannot guarantee this since the above semantic error Esem(s|t, θ) only accounts for positive ones. We thus enhance the semantic error with both positive and negative examples, and the corresponding max-semantic-margin error becomes: E∗ sem(s|t, θ) = max{0, Esem(s|t, θ) −Esem(s|t′, θ) + 1} (12) It tries to minimize the semantic distance between translation equivalents and maximize the semantic distance between non-translation pairs simultaneously. Using the above error function, we need to construct a negative example for each positive example. Suppose we are given a positive example (s, t), the correct translation t can be converted into a bad translation t′ by replacing the words in t with randomly chosen target language words. Then, a negative example (s, t′) is available. 3.3.3 Parameter Inference Like semi-supervised RAE (Li et al., 2013), the parameters θ in our BRAE model can also be divided into three sets: θL: word embedding matrix L for two languages (Section 3.1.1); θrec: recursive auto-encoder parameter matrices W (1), W (2), and bias terms b(1), b(2) for two languages (Section 3.1.2); θsem: transformation matrix W l and bias term bl for two directions in semantic distance computation (Section 3.3.1). 115 To have a deep understanding of the parameters, we rewrite Eq. 10: E(s, t; θ) = α(Erec(s; θ) + Erec(t; θ)) + (1 −α)(E∗ sem(s|t, θ) + E∗ sem(t|s, θ)) = (αErec(s; θs) + (1 −α)E∗ sem(s|t, θs)) + (αErec(t; θt) + (1 −α)E∗ sem(t|s, θt)) (13) We can see that the parameters θ can be divided into two classes: θs for the source language and θt for the target language. The above equation also indicates that the source-side parameters θs can be optimized independently as long as the semantic representation pt of the target phrase t is given to compute Esem(s|t, θ) with Eq. 9. It is similar for the target-side parameters θt. Assuming the target phrase representation pt is available, the optimization of the source-side parameters is similar to that of semi-supervised RAE. 
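Before turning to how these parameters are optimized, the semantic error of Eq. 9 and its max-margin variant of Eq. 12, which involve the θsem parameters listed above, can be sketched as follows for one direction (source given target); the other direction is symmetric. The function names and the explicit margin argument are ours, and gradient computation is omitted.

    import numpy as np

    def sem_error(p_src, p_tgt, Wl_s, bl_s):
        # Eq. 9: transform the source phrase vector into the target semantic space
        # and take half the squared Euclidean distance to the target phrase vector.
        return 0.5 * np.sum((p_tgt - np.tanh(Wl_s @ p_src + bl_s)) ** 2)

    def max_margin_sem_error(p_src, p_tgt, p_tgt_neg, Wl_s, bl_s, margin=1.0):
        # Eq. 12: the correct translation t should be semantically closer to s than
        # the corrupted translation t' by at least the margin (1 in the paper).
        pos = sem_error(p_src, p_tgt, Wl_s, bl_s)
        neg = sem_error(p_src, p_tgt_neg, Wl_s, bl_s)
        return max(0.0, pos - neg + margin)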
We apply the Stochastic Gradient Descent (SGD) algorithm to optimize each parameter: θs = θs −η∂Js ∂θs (14) In order to run SGD algorithm, we need to solve two problems: one for parameter initialization and the other for partial gradient calculation. In parameter initialization, θrec and θsem for the source language is randomly set according to a normal distribution. For the word embedding Ls, there are two choices. First, Ls is initialized randomly like other parameters. Second, the word embedding matrix Ls is pre-trained with DNN (Bengio et al., 2003; Collobert and Weston, 2008; Mikolov et al., 2013) using large-scale unlabeled monolingual data. We prefer to the second one since this kind of word embedding has already encoded some semantics of the words. In this work, we employ the toolkit Word2Vec (Mikolov et al., 2013) to pre-train the word embedding for the source and target languages. The word embeddings will be fine-tuned in our BRAE model to capture much more semantics. The partial gradient for one instance is computed as follows: ∂Js ∂θs = ∂E(s|t, θs) ∂θs + λθs (15) Where the source-side error given the target phrase representation includes reconstruction error and updated semantic error: E(s|t, θs) = αErec(s; θs) + (1 −α)E∗ sem(s|t, θs) (16) Given the current θs, we first construct the binary tree (as illustrated in Fig. 2) for any source-side phrase using the greedy algorithm (Socher et al., 2011). Then, the derivatives for the parameters in the fixed binary tree will be calculated via backpropagation through structures (Goller and Kuchler, 1996). Finally, the parameters will be updated using Eq. 14 and a new θs is obtained. The target-side parameters θt can be optimized in the same way as long as the source-side phrase representation ps is available. It seems a paradox that updating θs needs pt while updating θt needs ps. To solve this problem, we propose an co-training style algorithm which includes three steps: 1. Pre-training: applying unsupervised phrase embedding with standard RAE to pre-train the source- and target-side phrase representations ps and pt respectively (Section 2.1.2); 2. Fine-tuning: with the BRAE model, using target-side phrase representation pt to update the source-side parameters θs and obtain the finetuned source-side phrase representation p′ s, and meanwhile using ps to update θt and get the finetuned p′ t, and then calculate the joint error over the training corpus; 3. Termination Check: if the joint error reaches a local minima or the iterations reach the pre-defined number (25 is used in our experiments), we terminate the training procedure, otherwise we set ps = p′ s, pt = p′ t, and go to step 2. 4 Experiments With the semantic phrase embeddings and the vector space transformation function, we apply the BRAE to measure the semantic similarity between a source phrase and its translation candidates in the phrase-based SMT. Two tasks are involved in the experiments: phrase table pruning that discards entries whose semantic similarity is very low and decoding with the phrasal semantic similarities as additional new features. 4.1 Hyper-Parameter Settings The hyper-parameters in the BRAE model include the dimensionality of the word embedding n in Eq. 1, the balance weight α in Eq. 10, λs in Eq. 11, and the learning rate η in Eq. 14. For the dimensionality n, we have tried three settings n = 50, 100, 200 in our experiments. We 116 empirically set the learning rate η = 0.01. 
We draw α from 0.05 to 0.5 with step 0.05, and λs from {10−6, 10−5, 10−4, 10−3, 10−2}. The overall error of the BRAE model is employed to guide the search procedure. Finally, we choose α = 0.15, λL = 10−2, λrec = 10−3 and λsem = 10−3. 4.2 SMT Setup We have implemented a phrase-based translation system with a maximum entropy based reordering model using the bracketing transduction grammar (Wu, 1997; Xiong et al., 2006). The SMT evaluation is conducted on Chineseto-English translation. Accordingly, our BRAE model is trained on Chinese and English. The bilingual training data from LDC 2 contains 0.96M sentence pairs and 1.1M entity pairs with 27.7M Chinese words and 31.9M English words. A 5gram language model is trained on the Xinhua portion of the English Gigaword corpus and the English part of bilingual training data. The NIST MT03 is used as the development data. NIST MT04-06 and MT08 (news data) are used as the test data. Case-insensitive BLEU is employed as the evaluation metric. The statistical significance test is performed by the re-sampling approach (Koehn, 2004). In addition, we pre-train the word embedding with toolkit Word2Vec on large-scale monolingual data including the aforementioned data for SMT. The monolingual data contains 1.06B words for Chinese and 1.12B words for English. To obtain high-quality bilingual phrase pairs to train our BRAE model, we perform forced decoding for the bilingual training sentences and collect the phrase pairs used. After removing the duplicates, the remaining 1.12M bilingual phrase pairs (length ranging from 1 to 7) are obtained. 4.3 Phrase Table Pruning Pruning most of the phrase table without much impact on translation quality is very important for translation especially in environments where memory and time constraints are imposed. Many algorithms have been proposed to deal with this problem, such as significance pruning (Johnson et al., 2007; Tomeh et al., 2009), relevance pruning (Eck et al., 2007) and entropy-based pruning 2LDC category numbers: LDC2000T50, LDC2002L27, LDC2003E07, LDC2003E14, LDC2004T07, LDC2005T06, LDC2005T10 and LDC2005T34. (Ling et al., 2012; Zens et al., 2012). These algorithms are based on corpus statistics including cooccurrence statistics, phrase pair usage and composition information. For example, the significance pruning, which is proven to be a very effective algorithm, computes the probability named p-value, that tests whether a source phrase s and a target phrase t co-occur more frequently in a bilingual corpus than they happen just by chance. The higher the p-value, the more likely of the phrase pair to be spurious. Our work has the same objective, but instead of using corpus statistics, we attempt to measure the quality of the phrase pair from the view of semantic meaning. Given a phrase pair (s, t), the BRAE model first obtains their semantic phrase representations (ps, pt), and then transforms ps into target semantic space ps∗, pt into source semantic space pt∗. We finally get two similarities Sim(ps∗, pt) and Sim(pt∗, ps). Phrase pairs that have a low similarity are more likely to be noise and more prone to be pruned. In experiments, we discard the phrase pair whose similarity in two directions are smaller than a threshold 3. Table 1 shows the comparison results between our BRAE-based pruning method and the significance pruning algorithm. 
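Before turning to Table 1, the pruning rule just described can be summarized in a short sketch. This is our own illustration: the data layout, the similarity callables, and the use of the larger of the two directional similarities for ranking the always-kept candidates are assumptions, not the authors' code.

    def prune_phrase_table(table, sim_s2t, sim_t2s, threshold, keep_best=10):
        # `table` maps a source phrase to its candidate target phrases; sim_s2t and
        # sim_t2s are assumed to return Sim(p_s*, p_t) and Sim(p_t*, p_s).
        pruned = {}
        for src, candidates in table.items():
            scored = sorted(((max(sim_s2t(src, tgt), sim_t2s(src, tgt)), tgt)
                             for tgt in candidates), reverse=True)
            # Discard a pair only if both directional similarities fall below the
            # threshold; always keep the top `keep_best` candidates (footnote 3).
            pruned[src] = [tgt for rank, (score, tgt) in enumerate(scored)
                           if rank < keep_best or score >= threshold]
        return pruned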
We can see a common phenomenon in both of the algorithms: for the first few thresholds, the phrase table becomes smaller and smaller while the translation quality is not much decreased, but the performance jumps a lot at a certain threshold (16 for Significance pruning, 0.8 for BRAE-based one). Specifically, the Significance algorithm can safely discard 64% of the phrase table at its threshold 12 with only 0.1 BLEU loss in the overall test. In contrast, our BRAE-based algorithm can remove 72% of the phrase table at its threshold 0.7 with only 0.06 BLEU loss in the overall evaluation. When the two algorithms using a similar portion of the phrase table 4 (35% in BRAE and 36% in Significance), the BRAE-based algorithm outperforms the Significance algorithm on all the test sets except for MT04. It indicates that our BRAE model is a good alternative for phrase table pruning. Furthermore, our model is much more in3To avoid the situation that all the translation candidates for a source phrase are pruned, we always keep the first 10 best according to the semantic similarity. 4In the future, we will compare the performance by enforcing the two algorithms to use the same portion of phrase table 117 Method Threshold PhraseTable MT03 MT04 MT05 MT06 MT08 ALL Baseline 100% 35.81 36.91 34.69 33.83 27.17 34.82 BRAE 0.4 52% 35.94 36.96 35.00 34.71 27.77 35.16 0.5 44% 35.67 36.59 34.86 33.91 27.25 34.89 0.6 35% 35.86 36.71 34.93 34.63 27.34 35.05 0.7 28% 35.55 36.62 34.57 33.97 27.10 34.76 0.8 20% 35.06 36.01 34.13 33.04 26.66 34.04 Significance 8 48% 35.86 36.99 34.74 34.53 27.59 35.13 12 36% 35.59 36.73 34.65 34.17 27.16 34.72 16 25% 35.19 36.24 34.26 33.32 26.55 34.09 20 18% 35.05 36.09 34.02 32.98 26.37 33.97 Table 1: Comparison between BRAE-based pruning and Significance pruning of phrase table. Threshold means similarity in BRAE and negative-log-p-value in Significance. ”ALL” combines the development and test sets. Bold numbers denote that the result is better than or comparable to that of baseline. n = 50 is used for embedding dimensionality. tuitive because it is directly based on the semantic similarity. 4.4 Decoding with Phrasal Semantic Similarities Besides using the semantic similarities to prune the phrase table, we also employ them as two informative features like the phrase translation probability to guide translation hypotheses selection during decoding. Typically, four translation probabilities are adopted in the phrase-based SMT, including phrase translation probability and lexical weights in both directions. The phrase translation probability is based on co-occurrence statistics and the lexical weights consider the phrase as bag-of-words. In contrast, our BRAE model focuses on compositional semantics from words to phrases. Therefore, the semantic similarities computed using our BRAE model are complementary to the existing four translation probabilities. The semantic similarities in two directions Sim(ps∗, pt) and Sim(pt∗, ps) are integrated into our baseline phrase-based model. In order to investigate the influence of the dimensionality of the embedding space, we have tried three different settings n = 50, 100, 200. As shown in Table 2, no matter what n is, the BRAE model can significantly improve the translation quality in the overall test data. The largest improvement can be up to 1.7 BLEU score (MT06 for n = 50). It is interesting that with dimensionality growing, the translation performance is not consistently improved. 
We speculate that using n = 50 or n = 100 can already distinguish good translation candidates from bad ones. 4.5 Analysis on Semantic Phrase Embedding To have a better intuition about the power of the BRAE model at learning semantic phrase embeddings, we show some examples in Table 3. Given the BRAE model and the phrase training set, we search from the set the most semantically similar English phrases for any new input English phrase. The input phrases contain different number of words. The table shows that the unsupervised RAE can at most capture the syntactic property when the phrases are short. For example, the unsupervised RAE finds do not want for the input phrase do not agree. When the phrase becomes longer, the unsupervised RAE cannot even capture the syntactic property. In contrast, our BRAE model learns the semantic meaning for each phrase no matter whether it is short or relatively long. This indicates that the proposed BRAE model is effective at learning semantic phrase embeddings. 5 Discussions 5.1 Applications of The BRAE model As the semantic phrase embedding can fully represent the phrase, we can go a step further in the phrase-based SMT and feed the semantic phrase embeddings to DNN in order to model the whole translation process (e.g. derivation structure prediction). We will explore this direction in our future work. Besides SMT, the semantic phrase embeddings can be used in other cross-lingual tasks, such as cross-lingual question answering, since the semantic similarity between phrases in different languages can be calculated accurately. In addition to the cross-lingual applications, we believe the BRAE model can be applied in many 118 Method n MT03 MT04 MT05 MT06 MT08 ALL Baseline 35.81 36.91 34.69 33.83 27.17 34.82 BRAE 50 36.43 37.64 35.35 35.53 28.59 35.84+ 100 36.45 37.44 35.58 35.42 28.57 36.03+ 200 36.34 37.35 35.78 34.87 27.84 35.62+ Table 2: Experimental results of decoding with phrasal semantic similarities. n is the embedding dimensionality. ”+” means that the model significantly outperforms the baseline with p < 0.01. New Phrase Unsupervised RAE BRAE military force core force military power main force military strength labor force armed forces at a meeting to a meeting at the meeting at a rate during the meeting a meeting , at the conference do not agree one can accept do not favor i can understand will not compromise do not want not to approve each people in this nation each country regards every citizen in this country each country has its all the people in the country each other , and people all over the country Table 3: Semantically similar phrases in the training set for the new phrases. monolingual NLP tasks which depend on good phrase representations or semantic similarity between phrases, such as named entity recognition, parsing, textual entailment, question answering and paraphrase detection. 5.2 Model Extensions In fact, the phrases having the same meaning are translation equivalents in different languages, but are paraphrases in one language. Therefore, our model can be easily adapted to learn semantic phrase embeddings using paraphrases. Our BRAE model still has some limitations. For example, as each node in the recursive autoencoder shares the same weight matrix, the BRAE model would become weak at learning the semantic representations for long sentences with tens of words. Improving the model to semantically embed sentences is left for our future work. 
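The retrieval behind Table 3 (Section 4.5) is a nearest-neighbour search over phrase embeddings. Below is a minimal sketch, assuming a phrase-to-vector function embed standing in for the trained (B)RAE composition and cosine similarity as the metric; the paper only says "most semantically similar", so the metric choice is ours.

```python
import numpy as np

def nearest_phrases(query, candidates, embed, k=3):
    """Return the k phrases from `candidates` whose embeddings are closest to
    the query phrase under cosine similarity, as in the Table 3 analysis."""
    q = embed(query)
    q = q / (np.linalg.norm(q) + 1e-12)
    scored = []
    for phrase in candidates:
        v = embed(phrase)
        v = v / (np.linalg.norm(v) + 1e-12)
        scored.append((float(np.dot(q, v)), phrase))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [phrase for _, phrase in scored[:k]]
```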
6 Conclusions and Future Work This paper has explored the bilinguallyconstrained recursive auto-encoders in learning phrase embeddings, which can distinguish phrases with different semantic meanings. With the objective to minimize the semantic distance between translation equivalents and maximize the semantic distance between non-translation pairs simultaneously, the learned model can semantically embed any phrase in two languages and can transform the semantic space in one language to the other. Two end-to-end SMT tasks are involved to test the power of the proposed model at learning the semantic phrase embeddings. The experimental results show that the BRAE model is remarkably effective in phrase table pruning and decoding with phrasal semantic similarities. We have also discussed many other potential applications and extensions of our BRAE model. In the future work, we will explore four directions. 1) we will try to model the decoding process with DNN based on our semantic embeddings of the basic translation units. 2) we are going to learn semantic phrase embeddings with the paraphrase corpus. 3) we will apply the BRAE model in other monolingual and cross-lingual tasks. 4) we plan to learn semantic sentence embeddings by automatically learning different weight matrices for different nodes in the BRAE model. Acknowledgments We thank Nan Yang for sharing the baseline code and anonymous reviewers for their valuable comments. The research work has been partially funded by the Natural Science Foundation of China under Grant No. 61333018 and 61303181, and Hi-Tech Research and Development Program (863 Program) of China under Grant No. 2012AA011102. 119 References Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. 2013. Joint language and translation modeling with recurrent neural networks. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1044– 1054. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. Yoshua Bengio, Holger Schwenk, Jean-S´ebastien Sen´ecal, Fr´ederic Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, pages 137–186. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. George E Dahl, Dong Yu, Li Deng, and Alex Acero. 2012. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30–42. Kevin Duh, Graham Neubig, Katsuhito Sudoh, and Hajime Tsukada. 2013. Adaptation data selection using neural language models: Experiments in machine translation. In 51st Annual Meeting of the Association for Computational Linguistics, pages 678– 683. Matthias Eck, Stephen Vogal, and Alex Waibel. 2007. Estimating phrase pair relevance for translation model pruning. In MTSummit XI. Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. 2013. Learning semantic representations for the phrase translation model. arXiv preprint arXiv:1312.0482. Christoph Goller and Andreas Kuchler. 1996. 
Learning task-dependent distributed representations by backpropagation through structure. In IEEE International Conference on Neural Networks, volume 1, pages 347–352. John Howard Johnson, Joel Martin, George Foster, and Roland Kuhn. 2007. Improving translation quality by discarding most of the phrasetable. In Proceedings of EMNLP. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709. Koray Kavukcuoglu, Pierre Sermanet, Y-Lan Boureau, Karol Gregor, Micha¨el Mathieu, and Yann L Cun. 2010. Learning convolutional feature hierarchies for visual recognition. In Advances in neural information processing systems, pages 1090–1098. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP, pages 388–395. Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114. Peng Li, Yang Liu, and Maosong Sun. 2013. Recursive autoencoders for itg-based translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Wang Ling, Joao Grac¸a, Isabel Trancoso, and Alan Black. 2012. Entropy-based pruning for phrasebased machine translation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 962–971. Lemao Liu, Taro Watanabe, Eiichiro Sumita, and Tiejun Zhao. 2013. Additive neural networks for statistical machine translation. In 51st Annual Meeting of the Association for Computational Linguistics, pages 791–801. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS. Richard Socher, Christopher D Manning, and Andrew Y Ng. 2010. Learning continuous phrase representations and syntactic parsing with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop. Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161. Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013a. Parsing with compositional vector grammars. In Proceedings of ACL. 120 Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Nadi Tomeh, Nicola Cancedda, and Marc Dymetman. 2009. Complexity-based phrase-table filtering for statistical machine translation. In Proceedings of Summit XII, pages 144–151. Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with largescale neural language models improves translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1387–1392. Dekai Wu. 1997. 
Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational linguistics, 23(3):377–403. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proceedings of ACLCOLING, pages 505–512. Nan Yang, Shujie Liu, Mu Li, Ming Zhou, and Nenghai Yu. 2013. Word alignment modeling with context dependent deep neural network. In 51st Annual Meeting of the Association for Computational Linguistics. Richard Zens, Daisy Stanton, and Peng Xu. 2012. A systematic comparison of phrase table pruning techniques. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 972–983. Will Y Zou, Richard Socher, Daniel Cer, and Christopher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393–1398. 121
2014
11
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1166–1176, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Polylingual Tree-Based Topic Models for Translation Domain Adaptation Yuening Hu† Computer Science University of Maryland [email protected] Ke Zhai† Computer Science University of Maryland [email protected] Vladimir Eidelman FiscalNote Inc. Washington DC [email protected] Jordan Boyd-Graber iSchool and UMIACS University of Maryland [email protected] Abstract Topic models, an unsupervised technique for inferring translation domains improve machine translation quality. However, previous work uses only the source language and completely ignores the target language, which can disambiguate domains. We propose new polylingual tree-based topic models to extract domain knowledge that considers both source and target languages and derive three different inference schemes. We evaluate our model on a Chinese to English translation task and obtain up to 1.2 BLEU improvement over strong baselines. 1 Introduction Probabilistic topic models (Blei and Lafferty, 2009), exemplified by latent Dirichlet allocation (Blei et al., 2003, LDA), are one of the most popular statistical frameworks for navigating large unannotated document collections. Topic models discover—without any supervision—the primary themes presented in a dataset: the namesake topics. Topic models have two primary applications: to aid human exploration of corpora (Chang et al., 2009) or serve as a low-dimensional representation for downstream applications. We focus on the second application, which has been fruitful for computer vision (Li Fei-Fei and Perona, 2005), computational biology (Perina et al., 2010), and information retrieval (Kataria et al., 2011). In particular, we use topic models to aid statistical machine translation (Koehn, 2009, SMT). Modern machine translation systems use millions of examples of translations to learn translation rules. These systems work best when the training corpus has consistent genre, register, and topic. Systems that are robust to systematic variation in the training set are said to exhibit domain adaptation. † indicates equal contributions. As we review in Section 2, topic models are a promising solution for automatically discovering domains in machine translation corpora. However, past work either relies solely on monolingual source-side models (Eidelman et al., 2012; Hasler et al., 2012; Su et al., 2012), or limited modeling of the target side (Xiao et al., 2012). In contrast, machine translation uses inherently multilingual data: an SMT system must translate a phrase or sentence from a source language to a different target language, so existing applications of topic models (Eidelman et al., 2012) are wilfully ignoring available information on the target side that could aid domain discovery. This is not for a lack of multilingual topic models. Topic models bridge the chasm between languages using document connections (Mimno et al., 2009), dictionaries (Boyd-Graber and Resnik, 2010), and word alignments (Zhao and Xing, 2006). In Section 2, we review these models for discovering topics in multilingual datasets and discuss how they can improve SMT. However, no models combine multiple bridges between languages. In Section 3, we create a model—the polylingual tree-based topic models (ptLDA)—that uses information from both external dictionaries and document alignments simultaneously. 
In Section 4, we derive both MCMC and variational inference for this new topic model. In Section 5, we evaluate our model on the task of SMT using aligned datasets. We show that ptLDA offers better domain adaptation than other topic models for machine translation. Finally, in Section 6, we show how these topic models improve SMT with detailed examples. 2 Topic Models for Machine Translation Before considering past approaches using topic models to improve SMT, we briefly review lexical weighting and domain adaptation for SMT. 1166 2.1 Statistical Machine Translation Statistical machine translation casts machine translation as a probabilistic process (Koehn, 2009). For a parallel corpus of aligned source and target sentences (F, E), a phrase ¯f ∈F is translated to a phrase ¯e ∈E according to a distribution pw(¯e| ¯f). One popular method to estimate the probability pw(¯e| ¯f) is via lexical weighting features. Lexical Weighting In phrase-based SMT, lexical weighting features estimate the phrase pair quality by combining lexical translation probabilities of words in a phrase (Koehn et al., 2003). Lexical conditional probabilities p(e|f) are maximum likelihood estimates from relative lexical frequencies c(f, e)/P e c(f, e), where c(f, e) is the count of observing lexical pair (f, e) in the training dataset. The phrase pair probabilities pw(¯e| ¯f) are the normalized product of lexical probabilities of the aligned word pairs within that phrase pair (Koehn et al., 2003). In Section 2.2, we create topic-specific lexical weighting features. Cross-Domain SMT A SMT system is usually trained on documents with the same genre (e.g., sports, business) from a similar style (e.g., newswire, blog-posts). These are called domains. Translations within one domain are better than translations across domains since they vary dramatically in their word choices and style. A correct translation in one domain may be inappropriate in another domain. For example, “潜水” in a newspaper usually means “underwater diving”. On social media, it means a non-contributing “lurker”. Domain Adaptation for SMT Training a SMT system using diverse data requires domain adaptation. Early efforts focus on building separate models (Foster and Kuhn, 2007) and adding features (Matsoukas et al., 2009) to model domain information. Chiang et al. (2011) combine these approaches by directly optimizing genre and collection features by computing separate translation tables for each domain. However, these approaches treat domains as hand-labeled, constant, and known a priori. This setup is at best expensive and at worst infeasible for large data. Topic models provide a solution where domains can be automatically induced from raw data: treat each topic as a domain.1 1Henceforth we will use the term “topic” and “domain” interchangeably: “topic” to refer to the concept in topic models and “domain” to refer to SMT corpora. 2.2 Inducing Domains with Topic Models Topic models take the number of topics K and a collection of documents as input, where each document is a bag of words. They output two distributions: a distribution over topics for each document d; and a distribution over words for each topic. If each topic defines a SMT domain, the document’s topic distribution is a soft domain assignment for that document. Given the soft domain assignments, Eidelman et al. (2012) extract lexical weighting features conditioned on the topics, optimizing feature weights using the Margin Infused Relaxed Algorithm (Crammer et al., 2006, MIRA). 
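As a reference point for the topic-conditioned variants below, the baseline lexical weights of Section 2.1 reduce to a relative-frequency estimate over aligned word pairs. The sketch below is a minimal illustration; in the full pipeline the pairs would come from symmetrized word alignments, and the phrase-level weights are then products of these word-level probabilities.

```python
from collections import defaultdict

def lexical_translation_probs(aligned_pairs):
    """Maximum-likelihood lexical translation probabilities p(e|f) from counts
    of aligned word pairs: p(e|f) = c(f, e) / sum_e c(f, e).
    `aligned_pairs` is an iterable of (f, e) tokens from word-aligned data."""
    count = defaultdict(float)   # c(f, e)
    total = defaultdict(float)   # sum_e c(f, e)
    for f, e in aligned_pairs:
        count[(f, e)] += 1.0
        total[f] += 1.0
    return {(f, e): c / total[f] for (f, e), c in count.items()}
```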
The topics come from source documents only and create topic-specific lexical weights from the per-document topic distribution p(k | d). The lexical weight conditioned on a topic starts from the expected count \hat{c}_k(e, f) of a word translation pair under topic k,

\hat{c}_k(e, f) = \sum_d p(k \mid d)\, c_d(e, f),   (1)

where c_d(\cdot) is the number of occurrences of the word pair in document d. The lexical probability conditioned on topic k is the unsmoothed estimate from those expected counts,

p_w(e \mid f; k) = \frac{\hat{c}_k(e, f)}{\sum_e \hat{c}_k(e, f)},   (2)

from which we can compute the phrase pair probabilities p_w(\bar{e} \mid \bar{f}; k) by multiplying the lexical probabilities and normalizing as in Koehn et al. (2003). For a test document d, the document topic distribution p(k | d) is inferred from the topics learned on the training data. The feature value of a phrase pair (\bar{e}, \bar{f}) is

f_k(\bar{e} \mid \bar{f}) = -\log\big( p_w(\bar{e} \mid \bar{f}; k) \cdot p(k \mid d) \big),   (3)

a combination of the topic-dependent lexical weight and the topic distribution of the document from which the phrase is extracted. Eidelman et al. (2012) compute the resulting model score by combining these features in a linear model with other standard SMT features and optimizing the weights. Conceptually, this approach is just reweighting examples. The probability of a topic given a document is never zero, so every translation observed in the training set contributes to p_w(e \mid f; k); many of the expected counts, however, will be less than one. This obviates the explicit smoothing used in other domain adaptation systems (Chiang et al., 2011). We adopt this framework in its entirety. Our contribution is topics that capture multilingual information and thus better capture the domains in the parallel corpus.

2.3 Beyond Vanilla Topic Models

Eidelman et al. (2012) ignore a wealth of information that could improve topic models and help machine translation: they use only monolingual data from the source language, ignoring all target-language data and the lexical semantic resources available between source and target languages. Different languages complement each other to reduce ambiguity. For example, "木马" in a Chinese document can be either "hobbyhorse" in a children's topic or "Trojan virus" in a technology topic, and a short Chinese context obscures the true topic. However, these terms are unambiguous in English, revealing the true topic. While vanilla topic models (LDA) can only be applied to monolingual data, there are a number of topic models for parallel corpora: Zhao and Xing (2006) assume aligned word pairs share the same topics; Mimno et al. (2009) connect different languages through comparable documents. These models take advantage of word or document alignment information and infer more robust topics from the aligned dataset. On the other hand, lexical information can also induce topics from multilingual corpora. For instance, orthographic similarity connects words with the same meaning in related languages (Boyd-Graber and Blei, 2009), and dictionaries are a more general source of information on which words share meaning (Boyd-Graber and Resnik, 2010). These two approaches are not mutually exclusive, however; they reveal different connections across languages. In the next section, we combine these two approaches into a polylingual tree-based topic model.
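Before moving to the model itself, Equations (1)-(3) from Section 2.2 are simple enough to transcribe directly. The sketch below assumes per-document dictionaries of aligned word-pair counts and per-document topic distributions as inputs; these data structures and the function names are our own, and the phrase-level probability passed to the feature function would be obtained by multiplying and normalizing the lexical probabilities as in Koehn et al. (2003).

```python
import math
from collections import defaultdict

def topic_lexical_probs(doc_pair_counts, doc_topics, num_topics):
    """Topic-conditioned lexical probabilities (Equations 1 and 2).
    `doc_pair_counts[d]` maps an aligned word pair (e, f) to its count
    c_d(e, f) in document d; `doc_topics[d][k]` is p(k | d)."""
    c_hat = [defaultdict(float) for _ in range(num_topics)]      # Eq. (1)
    for d, pairs in enumerate(doc_pair_counts):
        for (e, f), c in pairs.items():
            for k in range(num_topics):
                c_hat[k][(e, f)] += doc_topics[d][k] * c

    p_w = [defaultdict(float) for _ in range(num_topics)]        # Eq. (2)
    for k in range(num_topics):
        totals = defaultdict(float)
        for (e, f), c in c_hat[k].items():
            totals[f] += c
        for (e, f), c in c_hat[k].items():
            p_w[k][(e, f)] = c / totals[f]
    return p_w

def topic_feature(p_phrase_given_topic, p_topic_given_doc):
    """Feature value of a phrase pair under one topic (Equation 3)."""
    return -math.log(p_phrase_given_topic * p_topic_given_doc)
```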
3 Polylingual Tree-based Topic Models In this section, we bring existing tree-based topic models (Boyd-Graber et al., 2007, tLDA) and polylingual topic models (Mimno et al., 2009, pLDA) together and create the polylingual treebased topic model (ptLDA) that incorporates both word-level correlations and document-level alignment information. Word-level Correlations Tree-based topic models incorporate the correlations between words by encouraging words that appear together in a concept to have similar probabilities given a topic. These concepts can come from WordNet (BoydGraber and Resnik, 2010), domain experts (Andrzejewski et al., 2009), or user constrains (Hu et al., 2013). When we gather concepts from bilingual resources, these concepts can connect different languages. For example, if a bilingual dictionary defines “电脑” as “computer”, we combine these words in a concept. We organize the vocabulary in a tree structure based on these concepts (Figure 1): words in the same concept share a common parent node, and then that concept becomes one of many children of the root node. Words that are not in any concept— uncorrelated words—are directly connected to the root node. We call this structure the tree prior. When this tree serves as a prior for topic models, words in the same concept are correlated in topics. For example, if “电脑” has high probability in a topic, so will “computer”, since they share the same parent node. With the tree priors, each topic is no longer a distribution over word types, instead, it is a distribution over paths, and each path is associated with a word type. The same word could appear in multiple paths, and each path represents a unique sense of this word. Document-level Alignments Lexical resources connect languages and help guide the topics. However, these resources are sometimes brittle and may not cover the whole vocabulary. Aligned document pairs provide a more corpus-specific, flexible association across languages. Polylingual topic models (Mimno et al., 2009) assume that the aligned documents in different languages share the same topic distribution and each language has a unique topic distribution over its word types. This level of connection between languages is flexible: instead of requiring the exact matching on words and sentences, only a coarse document alignment is necessary, as long as the documents discuss the same topics. Combine Words and Documents We propose polylingual tree-based topic models (ptLDA), which connect information across different languages by incorporating both word correlation (as in tLDA) and document alignment information (as in pLDA). We initially assume a given tree structure, deferring the tree’s provenance to the end of this section. 1168 Generative Process As in LDA, each word token is associated with a topic. However, tree-based topic models introduce an additional step of selecting a concept in a topic responsible for generating each word token. This is represented by a path yd,n through the topic’s tree. The probability of a path in a topic depends on the transition probabilities in a topic. Each concept i in topic k has a distribution over its children nodes is governed by a Dirichlet prior: πk,i ∼Dir(βi). Each path ends in a word (i.e., a leaf node) and the probability of a path is the product of all of the transitions between topics it traverses. Topics have correlations over words because the Dirichlet parameters can encode positive or negative correlations (Andrzejewski et al., 2009). 
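Concretely, the probability of a path is just the product of the chosen topic's transition probabilities along its edges. A minimal sketch, assuming the per-topic, per-node distributions π are stored in a dictionary keyed by (topic, node) and a path is given as a sequence of (node, child-index) steps; both representations are our own.

```python
def path_probability(path, topic, pi):
    """Probability of one root-to-leaf path under a topic: the product of the
    topic's transition probabilities along the path's edges.
    `pi[(topic, node)]` is that topic's distribution over `node`'s children;
    `path` is a list of (node, child_index) steps from the root to a leaf."""
    prob = 1.0
    for node, child_idx in path:
        prob *= pi[(topic, node)][child_idx]
    return prob

# e.g. path_probability([("root", 2), ("concept_7", 0)], topic=3, pi=pi)
```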
With these correlated in topics in hand, the generation of documents are very similar to LDA. For every document d, we first sample a distribution over topics θd from a Dirichlet prior Dir(α). For every token in the documents, we first sample a topic zdn from the multinomial distribution θd, and then sample a path ydn along the tree according to the transition distributions specified by topic zdn. Because every path ydn leads to a word wdn in language ldn, we append the sampled word wdn to document dldn. Aligned documents have words in both languages; monolingual documents only have words in a single language. The full generative process is: 1: for topic k ∈1, · · · , K do 2: for each internal node ni do 3: draw a distribution πki ∼Dir(βi) 4: for document set d ∈1, · · · , D do 5: draw a distribution θd ∼Dir(α) 6: for each word in documents d do 7: choose a topic zdn ∼Mult(θd) 8: sample a path ydn with probability Q (i,j)∈ydn πzdn,i,j 9: ydn leads to word wdn in language ldn 10: append token wdn to document dldn If we use a flat symmetric Dirichlet prior instead of the tree prior, we recover pLDA; and if all documents are monolingual (i.e., with distinct distributions over topics θ), we recover tLDA. ptLDA connects different languages on both the word level (using the word correlations) and the document level (using the document alignments). We compare these models’ machine translation performance in Section 5. computer,  market, 市 government, 政府 science, 科学 Dictionary: Vocabulary: English (0), Chinese (1) computer  market 市 government 政府science 科学 天气 scientific policy 0 scientific 0 policy 1  1 市 0 computer 0 market 0 government 0 science 1 政府 1 科学 1 天气 Prior Tree: 0 1 Figure 1: An example of constructing a prior tree from a bilingual dictionary: word pairs with the same meaning but in different languages are concepts; we create a common parent node to group words in a concept, and then connect to the root; uncorrelated words are connected to the root directly. Each topic uses this tree structure as a prior. Build Prior Tree Structures One remaining question is the source of the word-level connections across languages for the tree prior. We consider two resources to build trees that correlate words across languages. The first are a multilingual dictionaries (dict), which match words with the same meaning in different languages together. These relations between words are used as the concepts in the prior tree (Figure 1). In addition, we extract the word alignments from aligned sentences in a parallel corpus. The word pairs define concepts for the prior tree (align). We use both resources for our models (denoted as ptLDA-dict and ptLDA-align) in our experiments (Section 5) and show that they yield comparable performance in SMT. 4 Inference Inference of probabilistic models discovers the posterior distribution over latent variables. For a collection of D documents, each of which contains Nd number of words, the latent variables of ptLDA are: transition distributions πki for every topic k and internal node i in the prior tree structure; multinomial distributions over topics θd for every document d; topic assignments zdn and path ydn for the nth word wdn in document d. The joint distribution of polylingual tree-based topic models is p(w, z, y, θ, π; α, β) = Q k Q i p(πki|βi) (4) · Q d p(θd|α) · Q d Q n p(zdn|θd) · Q d Q n p(ydn|zdn, π)p(wdn|ydn)  . Exact inference is intractable, so we turn to ap1169 proximate posterior inference to discover the latent variables that best explain our data. 
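For intuition, the generative story above can be run forward as a toy sampler. The sketch below is not an inference routine: the tree interface (internal_nodes, root, children, is_leaf, leaf_label) and the assumption that beta[i] holds one Dirichlet pseudo-count per child of internal node i are ours.

```python
import numpy as np
from collections import defaultdict

def generate_corpus(tree, alpha, beta, num_topics, num_docs, doc_len, seed=0):
    """Toy forward simulation of the ptLDA generative story."""
    rng = np.random.default_rng(seed)
    # pi_{k,i} ~ Dir(beta_i): per-topic transition distribution at each node
    pi = {(k, i): rng.dirichlet(beta[i])
          for k in range(num_topics) for i in tree.internal_nodes()}

    corpus = []
    for _ in range(num_docs):
        theta = rng.dirichlet([alpha] * num_topics)      # theta_d ~ Dir(alpha)
        doc = defaultdict(list)                          # one word list per language
        for _ in range(doc_len):
            z = int(rng.choice(num_topics, p=theta))     # z_dn ~ Mult(theta_d)
            node = tree.root
            while not tree.is_leaf(node):                # sample the path y_dn
                kids = tree.children(node)
                node = kids[rng.choice(len(kids), p=pi[(z, node)])]
            word, lang = tree.leaf_label(node)
            doc[lang].append(word)                       # token joins d_lang
        corpus.append(dict(doc))
    return corpus
```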
Two widely used approximation approaches are Markov chain Monte Carlo (Neal, 2000, MCMC) and variational Bayesian inference (Blei et al., 2003, VB). Both frameworks produce good approximations of the posterior mode (Asuncion et al., 2009). In addition, Mimno et al. (2012) propose hybrid inference that takes advantage of parallelizable variational inference for global variables (Wolfe et al., 2008) while enjoying the sparse, efficient updates for local variables (Neal, 1993). In the rest of this section, we discuss all three methods in turn. We explore multiple inference schemes because while all of these methods optimize likelihood because they might give different results on the translation task. 4.1 Markov Chain Monte Carlo Inference We use a collapsed Gibbs sampler for tree-based topic models to sample the path ydn and topic assignment zdn for word wdn, p(zdn = k, ydn = s|¬zdn, ¬ydn, w; α, β) ∝I [Ω(s) = wdn] · Nk|d+α P k′(Nk′|d+α) · Q i→j∈s Ni→j|k+βi→j P j′(Ni→j′|k+βi→j′), where Ω(s) represents the word that path s leads to, Nk|d is the number of tokens assigned to topic k in document d and Ni→j|k is the number of times edge i →j in the tree assigned to topic k, excluding the topic assignment zdn and its path ydn of current token wdn. In practice, we sample the latent variables using efficient sparse updates (Yao et al., 2009; Hu and Boyd-Graber, 2012). 4.2 Variational Bayesian Inference Variational Bayesian inference approximates the posterior distribution with a simplified variational distribution q over the latent variables: document topic proportions θ, transition probabilities π, topic assignments z, and path assignments y. Variational distributions typically assume a mean-field distribution over these latent variables, removing all dependencies between the latent variables. We follow this assumption for the transition probabilities q(π | λ) and the document topic proportions q(θ | γ); both are variational Dirichlet distributions. However, due to the tight coupling between the path and topic variables, we must model this joint distribution as one multinomial, q(z, y | φ). If word token wdn has K topics and S paths, it has a K ∗S length variational multinomial φdnks, which represents the probability that the word takes path s in topic k. The complete variational distribution is q(θ, π, z, y|γ, λ, φ) = Q d q(θd|γd)· (5) Q k Q i q(πki|λki) · Q d Q n q(zdn, ydn|φdn). Our goal is to find the variational distribution q that is closest to the true posterior, as measured by the Kullback-Leibler (KL) divergence between the true posterior p and variational distribution q. This induces an “evidence lower bound” (ELBO, L) as a function of a variational distribution q: L = Eq[log p(w, z, y, θ, π)] −Eq[log q(θ, π, z, y)] = P k P i Eq[log p(πki|βi)] + P d Eq[log p(θd|α)] + P d P n Eq[log p(zdn, ydn|θd, π)p(wdn|ydn)] + H[q(θ)] + H[q(π)] + H[q(z, y)], (6) where H[•] represents the entropy of a distribution. Optimizing L using coordinate descent provides the following updates: φdnkt ∝exp{Ψ(γdk) −Ψ(P k γdk) (7) + P i→j∈s Ψ(λk,i→j) −Ψ(P j′ λk,i→j′)  }; γdk = αk + P n P s∈Ω−1(wdn) φdnkt; (8) λk,i→j = βi→j (9) + P d P n P s∈Ω′(wdn) φdnktI [i →j ∈s] ; where Ω′(wdn) is the set of all paths that lead to word wdn in the tree, and t represents one particular path in this set. I [i →j ∈s] is the indicator of whether path s contains an edge from node i to j. 4.3 Hybrid Stochastic Inference Given the complementary strengths of MCMC and VB, and following hybrid inference proposed by Mimno et al. 
(2012), we also derive hybrid inference for ptLDA. The transition distributions π are treated identically as in variational inference. We posit a variational Dirichlet distribution λ and choose the one that minimizes the KL divergence between the true posterior and the variational distribution. For topic z and path y, instead of variational updates, we use a Gibbs sampler within a document. We sample zdn and ydn conditioned on the topic 1170 and path assignments of all other document tokens, based on the variational expectation of π, q(zdn = k, ydn = s|¬zdn, ¬ydn; w) ∝ (10) (α + P m̸=n I [zdm = k]) · exp{Eq[log p(ydn|zdn, π)p(wdn|ydn)]}. This equation embodies how this is a hybrid algorithm: the first term resembles the Gibbs sampling term encoding how much a document prefers a topic, while the second term encodes the expectation under the variational distribution of how much a path is preferred by this topic, Eq[log p(ydn|zdn, π)p(wdn|ydn)] = I[Ω(ydn)=wdn] · P i→j∈ydn Eq[log λzdn,i→j]. For every document, we sweep over all its tokens and resample their topic zdn and path ydn conditioned on all the other tokens’ topic and path assignments ¬zdn and ¬ydn. To avoid bias, we discard the first B burn-in sweeps and take the following M samples. We then use the empirical average of these samples update the global variational parameter q(π|λ) based on how many times we sampled these paths λk,i→j = 1 M P d P n P s∈Ω−1(wdn) I [i →j ∈s] · I [zdn = k, ydn = s]  + βi→j. (11) For our experiments, we use the recommended settings B = 5 and M = 5 from Mimno et al. (2012). 5 Experiments We evaluate our new topic model, ptLDA, and existing topic models—LDA, pLDA, and tLDA—on their ability to induce domains for machine translation and the resulting performance of the translations on standard machine translation metrics. Dataset and SMT Pipeline We use the NIST MT Chinese-English parallel corpus (NIST), excluding non-UN and non-HK Hansards portions as our training dataset. It contains 1.6M sentence pairs, with 40.4M Chinese tokens and 44.4M English tokens. We replicate the SMT pipeline of Eidelman et al. (2012): word segmentation (Tseng et al., 2005), align (Och and Ney, 2003), and symmetrize (Koehn et al., 2003) the data. We train a modified KneserNey trigram language model on English (Chen and Goodman, 1996). We use CDEC (Dyer et al., 2010) for decoding, and MIRA (Crammer et al., 2006) for parameter training. To optimize SMT system, we tune the parameters on NIST MT06, and report results on three test sets: MT02, MT03 and MT05.2 Topic Models Configuration We compare our polylingual tree-based topic model (ptLDA) against tree-based topic models (tLDA), polylingual topic models (pLDA) and vanilla topic models (LDA).3 We also examine different inference algorithms— Gibbs sampling (gibbs), variational inference (variational) and hybrid approach (variationalhybrid)—on the effects of SMT performance. In all experiments, we set the per-document Dirichlet parameter α = 0.01 and the number of topics to 10, as used in Eidelman et al. (2012). Resources for Prior Tree To build the tree for tLDA and ptLDA, we extract the word correlations from a Chinese-English bilingual dictionary (Denisowski, 1997).4 We filter the dictionary using the NIST vocabulary, and keep entries mapping single Chinese and single English words. The prior tree has about 1000 word pairs (dict). We also extract the bidirectional word alignments between Chinese and English using GIZA++ (Och and Ney, 2003). 
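As a concrete (hypothetical) version of this alignment step, the aligned word pairs and their frequencies can be collected from the symmetrized alignments as follows; the resulting frequency table feeds the filtering described next. The data layout, token lists plus sets of index links, is an assumption.

```python
from collections import Counter

def aligned_pair_counts(bitext, alignments):
    """Count aligned (Chinese, English) word pairs from symmetrized word
    alignments.  `bitext` is a list of (src_tokens, tgt_tokens) sentence pairs
    and `alignments[s]` is a set of (i, j) index pairs linking
    src_tokens[i] to tgt_tokens[j]."""
    counts = Counter()
    for (src_tokens, tgt_tokens), links in zip(bitext, alignments):
        for i, j in links:
            counts[(src_tokens[i], tgt_tokens[j])] += 1
    return counts

def filter_pairs(counts, lo=500, hi=50000):
    """Keep only pairs seen between `lo` and `hi` times; the survivors become
    concepts in the alignment-based prior tree."""
    return {pair for pair, c in counts.items() if lo <= c <= hi}
```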
We then remove the word pairs appearing more than 50K times or fewer than 500 times and construct a second prior tree with about 2500 word pairs (align). We apply both trees to tLDA and ptLDA, denoted as tLDA-dict, tLDA-align, ptLDA-dict, and ptLDAalign. However, tLDA-align and ptLDA-align do worse than tLDA-dict and ptLDA-dict, so we omit tLDA-align in the results. Domain Adaptation using Topic Models We examine the effectiveness of using topic models for domain adaptation on standard SMT evaluation metrics—BLEU (Papineni et al., 2002) and TER (Snover et al., 2006). We report the results on three different test sets (Figure 2), and all SMT results are averaged over five runs. We refer to the SMT model without domain adaptation as baseline.5 LDA marginally improves machine translation (less than half a BLEU point). 2The NIST datasets contain 878, 919, 1082 and 1664 sentences for MT02, MT03, MT05 and MT06 respectively. 3For Gibbs sampling, we use implementations available in Hu and Boyd-Graber (2012) for tLDA; and Mallet (McCallum, 2002) for LDA and pLDA. 4This is a two-level tree structure. However, one could build a more sophisticated tree prior with a hierarchical dictionary such as multilingual WordNet. 5Our replication of Eidelman et al. (2012) yields slightly higher baseline performance, but the trend is consistent. 1171 gibbs variational variational−hybrid 34.8 +0.3 +0.6 +0.4 +1.2 +0.5 35.1 +0.1 +0.3 +0.2 +0.7 +0.4 31.4 +0.4 +0.7 +0.4 +1 +0.4 34.8 +0.4 +0.5 +0.4 +0.8 +0.5 35.1 −0.1 +0.2 −0.1 +0.2 +0.2 31.4 +0.3 +0.5 +0.3 +0.8 +0.4 34.8 +0.2 +0.4 +0.2 +0.7 +0.4 35.1 −0.1 −0.1 −0.1 +0.2 +0.2 31.4 +0.3 +0.3 +0.1 +0.6 +0.3 31 32 33 34 35 36 37 31 32 33 34 35 36 37 31 32 33 34 35 36 37 mt02 mt03 mt05 BLEU Score model baseline LDA pLDA ptLDA−align ptLDA−dict tLDA−dict gibbs variational variational−hybrid 61.9 −0.1 −1 −1.2 −2.5 −1.1 60.1 −0.3 −0.9 −0.8 −1.9 −0.9 63.3 −0.9 −1.3 −1.2 −2.6 −1.1 61.9 −0.4 −1 −0.6 −1.6 −1.3 60.1 −0.2 −0.5 −0.1 −1 −0.7 63.3 −0.5 −1 −0.4 −1.5 −1.2 61.9 −0.3 −0.7 −0.1 −1.6 −0.9 60.1 0 −0.2 +0.2 −1.1 −0.5 63.3 −0.4 −0.7 −0.1 −1.6 −0.8 56 58 60 62 64 66 56 58 60 62 64 66 56 58 60 62 64 66 mt02 mt03 mt05 TER Score model baseline LDA pLDA ptLDA−align ptLDA−dict tLDA−dict Figure 2: Machine translation performance for different models and inference algorithms against the baseline, on BLEU (top, higher the better) and TER (bottom, lower the better) scores. Our proposed ptLDA performs best. Results are averaged over 5 random runs. For model ptLDA-dict with different inference schemes, the BLEU improvement on three test sets is mostly significant with p = 0.01, except the results on MT03 using variational and variational-hybrid inferences. Polylingual topic models pLDA and tree-based topic models tLDA-dict are consistently better than LDA, suggesting that incorporating additional bilingual knowledge improves topic models. These improvements are not redundant: our new ptLDA-dict model, which has aspects of both models yields the best performance among these approaches—up to a 1.2 BLEU point gain (higher is better), and -2.6 TER improvement (lower is better). The BLEU improvement is significant (Koehn, 2004) at p = 0.01,6 except on MT03 with variational and variationalhybrid inference. While ptLDA-align performs better than baseline SMT and LDA, it is worse than ptLDA-dict, possibly because of errors in the word alignments, making the tree priors less effective. 
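The significance testing cited above (Koehn, 2004) is paired bootstrap resampling. The sketch below is a generic version, not the exact script used in the paper: corpus_bleu(hyps, refs) is an assumed external scorer supplied by the caller, and details such as the number of trials and tie handling vary across implementations.

```python
import random

def paired_bootstrap(sys_a, sys_b, refs, corpus_bleu, trials=1000, seed=0):
    """Repeatedly resample test sentences with replacement and count how often
    system A beats system B on the resampled sets; the complement of that
    fraction approximates the p-value for 'A is not better than B'."""
    rng = random.Random(seed)
    n, wins = len(refs), 0
    for _ in range(trials):
        idx = [rng.randrange(n) for _ in range(n)]
        a = corpus_bleu([sys_a[i] for i in idx], [refs[i] for i in idx])
        b = corpus_bleu([sys_b[i] for i in idx], [refs[i] for i in idx])
        if a > b:
            wins += 1
    return 1.0 - wins / trials
```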
(Footnote 6: Because we have multiple runs of each topic model, and thus different translation models, we select the run closest to the average BLEU for the translation significance test.) Scalability While gibbs has better translation scores than variational and variational-hybrid, it is less scalable to larger datasets. With 1.6M NIST training sentences, gibbs takes nearly a week to run 1000 iterations. In contrast, the parallelized variational and variational-hybrid approaches, which we implement in MapReduce (Dean and Ghemawat, 2004; Wolfe et al., 2008; Zhai et al., 2012), take less than a day to converge. 6 Discussion In this section, we qualitatively analyze the translation results and investigate how ptLDA and its cousins improve SMT. We also discuss other approaches to improve unsupervised domain adaptation for SMT. 6.1 How do Topic Models Help SMT? We present two examples of how topic models can improve SMT. The first example shows that both LDA and ptLDA improve over the baseline. The second example shows how LDA introduces biases that mislead SMT and how ptLDA's bilingual constraints correct these mistakes. Figure 3 shows a sentence about a company
The source sentence discusses foreign affairs. The baseline correctly translates the word “影响” to “affect”. However, LDA—which only takes monolingual information from the source language— assigns this sentence to economic development. This misleads SMT to lower the probability for the correct translation “affect”; it chooses “impact” instead. In contrast, ptLDA—which incorporates bilingual constraints—successfully labels this sentence as foreign affairs and produces a softer, more nuanced translation that better matches the reference. The translation of “承诺” is very similar, except in this case, both the baseline and LDA produce the incorrect translation “the commitment of”. This is possible because the probabilities of translating “承诺” to “promised to” and translating “promised to” to “承诺” (the correct translation, in both directions) increase when conditioned on ptLDA’s correct topic but decrease when conditioned on LDA’s incorrect topic. 6.2 Other Approaches Other approaches have used topic models for machine translation. Xiao et al. (2012) present a topic similarity model based on LDA that produces a feature that weights grammar rules based on topic compatibility. They also model the source and target side of rules and compare the target similarity during decoding by projecting the target distribution into the source space. Hasler et al. (2012) use the source-side topic assignments from hidden topic Markov models (Gruber et al., 2007, HTMM) which models documents as a Markov chain and assign one topic to the whole sentence, instead of a mixture of topics. Su et al. (2012) also apply HTMM to monolingual data and apply the results to machine translation. To our knowledge, however, this is the first work to use multilingual topic models for domain adaptation in machine translation. 6.3 Improving Language Models Topic models capture document-level properties of language, but a critical component of machine translation systems is the language model, which provides local constraints and preferences. Domain adaptation for language models (Bellegarda, 2004; Wood and Teh, 2009) is an important avenue for improving machine translation. Models that simultaneously discover global document themes as well as local, contextual domain-specific informa1173 source 消息指出, 国使人向中方官表示, 国方面并没有支持朝人以种方法前往国, 国并不希望类事件再次 生, 以免中国和朝半双方 的关系来影响, 国方面并向中国方面承, 愿意助中国管理好在京的国居民 reference sources said rok embassy personnel told chinese officials that rok has not backed any dpr koreans to get to rok in such a manner and rok would not like such things happen again to affect relationship between china and the two sides of the korean peninsula . rok also promised to assist china in the administration of koreans in beijing . baseline LDA ptLDA … does not want ... … does not hope that ... … does not hope that ... source … 不希望 ... LDA-Topic 5 (economic development) ptLDA-Topic 2 (foreign affairs) … so as to avoid impact the relations… … so as not to affect the relations… … so as not to affect the relations… … south korea and the commitment of the chinese side ... … the rok side , and the commitment of the chinese side ... … south korea has promised to the chinese side ... … 以免...关系来影响... … 国方面并向中国方面承… reference … would not like ... … to affect the relationship … … rok also promised to the chinese side ... 
(develop), 国(country), 两(two), 中国(China), 关系(relation), 中, 合作(cooperate), (economy), 人民(people), 友好(friendly), 国家(country), 新(new), (problem), 上, 加强(emphasize), 重要 (important), 和平(peace), 共同(together), 建(build), 世界(world) china, (issue), military, united, president, 国家(country), 地区(area), minister, 伊 拉克(Iraq), 和平(peace), nuclear, people, (president), peace, security,   (UN), (military), 以色列(Israel), iraq, foreign, international, 部(army), beijing, world, defense, south, 安 全(security), war, (agreement), 会(conference) Figure 4: Better SMT result using ptLDA compared to LDA and the baseline. Top row: the source sentence and a reference translation. Second row: the highlighted translations from different models. Third row: the change of relevant translation probabilities after incorporating domain knowledge from LDA and ptLDA. Bottom row: most-probable words for the topics the source sentence is assigned to under LDA (left) and ptLDA (right). The meanings of Chinese words are in parenthesis. tion (Wallach, 2006; Boyd-Graber and Blei, 2008) may offer further improvements. 6.4 External Data The topic models presented here only require weak alignment between documents at the document level. Extending to larger datasets for learning topics is straightforward in principle. For example, ptLDA could learn domains from a much larger corpus like Wikipedia and then apply the extracted domains to machine translation data. However, this presents further challenges, as Wikipedia’s domains are not representative of newswire machine translation datasets; a flexible hierarchical topic model (Teh et al., 2006) would better distinguish useful domains from extraneous ones. 7 Conclusion Topic models generate great interest, but their use in “real world” applications still lags; this is particularly true for multilingual topic models. As topic models become more integrated in commonplace applications, their adoption, understanding, and robustness will improve. This paper contributes to the deeper integration of topic models into critical applications by presenting a new multilingual topic model, ptLDA, comparing it with other multilingual topic models on a machine translation task, and showing that these topic models improve machine translation. ptLDA models both source and target data to induce domains from both dictionaries and alignments. Further improvement is possible by incorporating topic models deeper in the decoding process and adding domain knowledge to the language model. Acknowledgments We would like to thank the anonymous reviewers, Doug Oard, and John Morgan for their helpful comments, and thank Junhui Li and Ke Wu for insightful discussions. This work was supported by NSF Grant IIS-1320538. Boyd-Graber is also supported by NSF Grant CCF-1018625. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor. References David Andrzejewski, Xiaojin Zhu, and Mark Craven. 2009. Incorporating domain knowledge into topic modeling via Dirichlet forest priors. In Proceedings of the International Conference of Machine Learning. Arthur Asuncion, Max Welling, Padhraic Smyth, and Yee Whye Teh. 2009. On smoothing and inference for topic models. In Proceedings of Uncertainty in Artificial Intelligence. Jerome R. Bellegarda. 2004. Statistical language model adaptation: review and perspectives. volume 42, pages 93–108. 1174 David M. Blei and John D. Lafferty. 2009. Visualizing topics with Multi-Word expressions. arXiv. David M. 
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1177–1187, Baltimore, Maryland, USA, June 23-25 2014. c⃝2014 Association for Computational Linguistics Low-Resource Semantic Role Labeling Matthew R. Gormley1 Margaret Mitchell2 Benjamin Van Durme1 Mark Dredze1 1Human Language Technology Center of Excellence Johns Hopkins University, Baltimore, MD 21211 2Microsoft Research Redmond, WA 98052 [email protected] | [email protected] | [email protected] | [email protected] Abstract We explore the extent to which highresource manual annotations such as treebanks are necessary for the task of semantic role labeling (SRL). We examine how performance changes without syntactic supervision, comparing both joint and pipelined methods to induce latent syntax. This work highlights a new application of unsupervised grammar induction and demonstrates several approaches to SRL in the absence of supervised syntax. Our best models obtain competitive results in the high-resource setting and state-ofthe-art results in the low resource setting, reaching 72.48% F1 averaged across languages. We release our code for this work along with a larger toolkit for specifying arbitrary graphical structure.1 1 Introduction The goal of semantic role labeling (SRL) is to identify predicates and arguments and label their semantic contribution in a sentence. Such labeling defines who did what to whom, when, where and how. For example, in the sentence “The kids ran the marathon”, ran assigns a role to kids to denote that they are the runners; and a role to marathon to denote that it is the race course. Models for SRL have increasingly come to rely on an array of NLP tools (e.g., parsers, lemmatizers) in order to obtain state-of-the-art results (Bj¨orkelund et al., 2009; Zhao et al., 2009). Each tool is typically trained on hand-annotated data, thus placing SRL at the end of a very highresource NLP pipeline. However, richly annotated data such as that provided in parsing treebanks is expensive to produce, and may be tied to specific domains (e.g., newswire). Many languages do 1http://www.cs.jhu.edu/˜mrg/software/ not have such supervised resources (low-resource languages), which makes exploring SRL crosslinguistically difficult. The problem of SRL for low-resource languages is an important one to solve, as solutions pave the way for a wide range of applications: Accurate identification of the semantic roles of entities is a critical step for any application sensitive to semantics, from information retrieval to machine translation to question answering. In this work, we explore models that minimize the need for high-resource supervision. We examine approaches in a joint setting where we marginalize over latent syntax to find the optimal semantic role assignment; and a pipeline setting where we first induce an unsupervised grammar. We find that the joint approach is a viable alternative for making reasonable semantic role predictions, outperforming the pipeline models. These models can be effectively trained with access to only SRL annotations, and mark a state-of-the-art contribution for low-resource SRL. To better understand the effect of the lowresource grammars and features used in these models, we further include comparisons with (1) models that use higher-resource versions of the same features; (2) state-of-the-art high resource models; and (3) previous work on low-resource grammar induction. In sum, this paper makes several experimental and modeling contributions, summarized below. 
Experimental contributions: • Comparison of pipeline and joint models for SRL. • Subtractive experiments that consider the removal of supervised data. • Analysis of the induced grammars in unsupervised, distantly-supervised, and joint training settings. 1177 Modeling contributions: • Simpler joint CRF for syntactic and semantic dependency parsing than previously reported. • New application of unsupervised grammar induction: low-resource SRL. • Constrained grammar induction using SRL for distant-supervision. • Use of Brown clusters in place of POS tags for low-resource SRL. The pipeline models are introduced in § 3.1 and jointly-trained models for syntactic and semantic dependencies (similar in form to Naradowsky et al. (2012)) are introduced in § 3.2. In the pipeline models, we develop a novel approach to unsupervised grammar induction and explore performance using SRL as distant supervision. The joint models use a non-loopy conditional random field (CRF) with a global factor constraining latent syntactic edge variables to form a tree. Efficient exact marginal inference is possible by embedding a dynamic programming algorithm within belief propagation as in Smith and Eisner (2008). Even at the expense of no dependency path features, the joint models best pipeline-trained models for state-of-the-art performance in the lowresource setting (§ 4.4). When the models have access to observed syntactic trees, they achieve near state-of-the-art accuracy in the high-resource setting on some languages (§ 4.3). Examining the learning curve of the joint and pipeline models in two languages demonstrates that a small number of labeled SRL examples may be essential for good end-task performance, but that the choice of a good model for grammar induction has an even greater impact. 2 Related Work Our work builds upon research in both semantic role labeling and unsupervised grammar induction (Klein and Manning, 2004; Spitkovsky et al., 2010a). Previous related approaches to semantic role labeling include joint classification of semantic arguments (Toutanova et al., 2005; Johansson and Nugues, 2008), latent syntax induction (Boxwell et al., 2011; Naradowsky et al., 2012), and feature engineering for SRL (Zhao et al., 2009; Bj¨orkelund et al., 2009). Toutanova et al. (2005) introduced one of the first joint approaches for SRL and demonstrated that a model that scores the full predicateargument structure of a parse tree could lead to significant error reduction over independent classifiers for each predicate-argument relation. Johansson and Nugues (2008) and Llu´ıs et al. (2013) extend this idea by coupling predictions of a dependency parser with predictions from a semantic role labeler. In the model from Johansson and Nugues (2008), the outputs from an SRL pipeline are reranked based on the full predicateargument structure that they form. The candidate set of syntactic-semantic structures is reranked using the probability of the syntactic tree and semantic structure. Llu´ıs et al. (2013) use a joint arcfactored model that predicts full syntactic paths along with predicate-argument structures via dual decomposition. Boxwell et al. (2011) and Naradowsky et al. (2012) observe that syntax may be treated as latent when a treebank is not available. Boxwell et al. (2011) describe a method for training a semantic role labeler by extracting features from a packed CCG parse chart, where the parse weights are given by a simple ruleset. Naradowsky et al. (2012) marginalize over latent syntactic dependency parses. 
Both Boxwell et al. (2011) and Naradowsky et al. (2012) suggest methods for SRL without supervised syntax, however, their features come largely from supervised resources. Even in their lowest resource setting, Boxwell et al. (2011) require an oracle CCG tag dictionary extracted from a treebank. Naradowsky et al. (2012) limit their exploration to a small set of basic features, and included high-resource supervision in the form of lemmas, POS tags, and morphology available from the CoNLL 2009 data. There has not yet been a comparison of techniques for SRL that do not rely on a syntactic treebank, and no exploration of probabilistic models for unsupervised grammar induction within an SRL pipeline that we have been able to find. Related work for the unsupervised learning of dependency structures separately from semantic roles primarily comes from Klein and Manning (2004), who introduced the Dependency Model with Valence (DMV). This is a robust generative model that uses a head-outward process over word classes, where heads generate arguments. Spitkovsky et al. (2010a) show that Viterbi (hard) EM training of the DMV with simple uniform initialization of the model parameters yields higher accuracy models than standard soft-EM 1178 Parsing Model Semantic Dependency Model Corpus Text Text Labeled With Semantic Roles Train Time, Constrained Grammar Induction: Observed Constraints Figure 1: Pipeline approach to SRL. In this simple pipeline, the first stage syntactically parses the corpus, and the second stage predicts semantic predicate-argument structure for each sentence using the labels of the first stage as features. In our low-resource pipelines, we assume that the syntactic parser is given no labeled parses—however, it may optionally utilize the semantic parses as distant supervision. Our experiments also consider ‘longer’ pipelines that include earlier stages: a morphological analyzer, POS tagger, lemmatizer. training. In Viterbi EM, the E-step finds the maximum likelihood corpus parse given the current model parameters. The M-step then finds the maximum likelihood parameters given the corpus parse. We utilize this approach to produce unsupervised syntactic features for the SRL task. Grammar induction work has further demonstrated that distant supervision in the form of ACE-style relations (Naseem and Barzilay, 2011) or HTML markup (Spitkovsky et al., 2010b) can lead to considerable gains. Recent work in fully unsupervised dependency parsing has supplanted these methods with even higher accuracies (Spitkovsky et al., 2013) by arranging optimizers into networks that suggest informed restarts based on previously identified local optima. We do not reimplement these approaches within the SRL pipeline here, but provide comparison of these methods against our grammar induction approach in isolation in § 4.5. In both pipeline and joint models, we use features adapted from state-of-the-art approaches to SRL. This includes Zhao et al. (2009) features, who use feature templates from combinations of word properties, syntactic positions including head and children, and semantic properties; and features from Bj¨orkelund et al. (2009), who utilize features on syntactic siblings and the dependency path concatenated with the direction of each edge. Features are described further in § 3.3. 3 Approaches We consider an array of models, varying: 1. Pipeline vs. joint training (Figures 1 and 2) 2. Types of supervision 3. 
The objective function at the level of syntax 3.1 Unsupervised Syntax in the Pipeline Typical SRL systems are trained following a pipeline where the first component is trained on supervised data, and each subsequent component is trained using the 1-best output of the previous components. A typical pipeline consists of a POS tagger, dependency parser, and semantic role labeler. In this section, we introduce pipelines that remove the need for a supervised tagger and parser by training in an unsupervised and distantly supervised fashion. Brown Clusters We use fully unsupervised Brown clusters (Brown et al., 1992) in place of POS tags. Brown clusters have been used to good effect for various NLP tasks such as named entity recognition (Miller et al., 2004) and dependency parsing (Koo et al., 2008; Spitkovsky et al., 2011). The clusters are formed by a greedy hierachical clustering algorithm that finds an assignment of words to classes by maximizing the likelihood of the training data under a latent-class bigram model. Each word type is assigned to a finegrained cluster at a leaf of the hierarchy of clusters. Each cluster can be uniquely identified by the path from the root cluster to that leaf. Representing this path as a bit-string (with 1 indicating a left and 0 indicating a right child) allows a simple coarsening of the clusters by truncating the bit-strings. We train 1000 Brown clusters for each of the CoNLL2009 languages on Wikipedia text.2 Unsupervised Grammar Induction Our first method for grammar induction is fully unsupervised Viterbi EM training of the Dependency Model with Valence (DMV) (Klein and Manning, 2004), with uniform initialization of the model parameters. We define the DMV such that it generates sequences of word classes: either POS tags or Brown clusters as in Spitkovsky et al. (2011). The DMV is a simple generative model for projective dependency trees. Children are generated recursively for each node. Conditioned on the parent class, the direction (right or left), and the current valence (first child or not), a coin is flipped to decide whether to generate another child; the distribution over child classes is conditioned on only the parent class and direction. 2The Wikipedia text was tokenized for Polyglot (Al-Rfou’ et al., 2013): http://bit.ly/embeddings 1179 Constrained Grammar Induction Our second method, which we will refer to as DMV+C, induces grammar in a distantly supervised fashion by using a constrained parser in the E-step of Viterbi EM. Since the parser is part of a pipeline, we constrain it to respect the downstream SRL annotations during training. At test time, the parser is unconstrained. Dependency-based semantic role labeling can be described as a simple structured prediction problem: the predicted structure is a labeled directed graph, where nodes correspond to words in the sentence. Each directed edge indicates that there is a predicate-argument relationship between the two words; the parent is the predicate and the child the argument. The label on the edge indicates the type of semantic relationship. Unlike syntactic dependency parsing, the graph is not required to be a tree, nor even a connected graph. Self-loops and crossing arcs are permitted. The constrained syntactic DMV parser treats the semantic graph as observed, and constrains the syntactic parent to be chosen from one of the semantic parents, if there are any. 
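To make the head-outward generative story concrete, the following minimal sketch samples a sequence of word classes from a toy DMV. The class inventory, probability values, and function names are invented for illustration and are not the parameters or implementation used in this work.

```python
import random

# Toy DMV parameters (invented values for illustration only).
# p_stop[(head_class, direction, is_first_child)]: probability of stopping.
# p_child[(head_class, direction)]: distribution over dependent classes.
p_stop = {
    ("VB", "left", True): 0.3,  ("VB", "left", False): 0.8,
    ("VB", "right", True): 0.4, ("VB", "right", False): 0.8,
    ("NN", "left", True): 0.6,  ("NN", "left", False): 0.9,
    ("NN", "right", True): 0.9, ("NN", "right", False): 0.95,
}
p_child = {
    ("VB", "left"): {"NN": 1.0}, ("VB", "right"): {"NN": 1.0},
    ("NN", "left"): {"NN": 1.0}, ("NN", "right"): {"NN": 1.0},
}

def sample(dist):
    """Draw one item from a {item: probability} dictionary."""
    r, cum = random.random(), 0.0
    for item, p in dist.items():
        cum += p
        if r < cum:
            return item
    return item  # fall back to the last item on rounding error

def generate(head, depth=0, max_depth=3):
    """Head-outward generation: flip a stop coin conditioned on (class,
    direction, valence); if we continue, draw the dependent class from a
    distribution conditioned on (class, direction) and recurse."""
    left, right = [], []
    for direction, deps in (("left", left), ("right", right)):
        first = True
        while depth < max_depth and random.random() >= p_stop[(head, direction, first)]:
            deps.append(generate(sample(p_child[(head, direction)]), depth + 1, max_depth))
            first = False
    return left + [head] + right  # dependent ordering simplified

def flatten(tree):
    out = []
    for node in tree:
        out.extend(flatten(node) if isinstance(node, list) else [node])
    return out

print(flatten(generate("VB")))  # e.g. ['NN', 'VB', 'NN']
```

In Viterbi EM, the sampling above is replaced by finding the maximum-likelihood parse of the corpus in the E-step and re-estimating p_stop and p_child from the resulting counts in the M-step.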
In some cases, imposing this constraint would not permit any projective dependency parses—in this case, we ignore the semantic constraint for that sentence. We parse with the CKY algorithm (Younger, 1967; Aho and Ullman, 1972) by utilizing a PCFG corresponding to the DMV (Cohn et al., 2010). Each chart cell allows only non-terminals compatible with the constrained sets. This can be viewed as a variation of Pereira and Schabes (1992). Semantic Dependency Model As described above, semantic role labeling can be cast as a structured prediction problem where the structure is a labeled semantic dependency graph. We define a conditional random field (CRF) (Lafferty et al., 2001) for this task. Because each word in a sentence may be in a semantic relationship with any other word (including itself), a sentence of length n has n2 possible edges. We define a single L+1-ary variable for each edge, whose value can be any of L semantic labels or a special label indicating there is no predicate-argument relationship between the two words. In this way, we jointly perform identification (determining whether a semantic relationship exists) and classification (determining the semantic label). This use of an L+1ary variable is in contrast to the model of Naradowsky et al. (2012), which used a more complex DEPTREE Dep 1,1 Role 1,1 Role 1,2 Role 1,3 Role n,n Dep 1,2 Dep 1,3 Dep n,n ... ... Figure 2: Factor graph for the joint syntactic/semantic dependency parsing model. set of binary variables and required a constraint factor permitting AT-MOST-ONE. We include one unary factor for each variable. We optionally include additional variables that perform word sense disambiguation for each predicate. Each has a unary factor and is completely disconnected from the semantic edge (similar to Naradowsky et al. (2012)). These variables range over all the predicate senses observed in the training data for the lemma of that predicate. 3.2 Joint Syntactic and Semantic Parsing Model In Section 3.1, we introduced pipeline-trained models for SRL, which used grammar induction to predict unlabeled syntactic parses. In this section, we define a simple model for joint syntactic and semantic dependency parsing. This model extends the CRF model in Section 3.1 to include the projective syntactic dependency parse for a sentence. This is done by including an additional n2 binary variables that indicate whether or not a directed syntactic dependency edge exists between a pair of words in the sentence. Unlike the semantic dependencies, these syntactic variables must be coupled so that they produce a projective dependency parse; this requires an additional global constraint factor to ensure that this is the case (Smith and Eisner, 2008). The constraint factor touches all n2 syntactic-edge variables, and multiplies in 1.0 if they form a projective dependency parse, and 0.0 otherwise. We couple each syntactic edge variable to its semantic edge variable with a binary factor. Figure 2 shows the factor graph for this joint model. Note that our factor graph does not contain any loops, thereby permitting efficient exact marginal inference just as in Naradowsky et al. (2012). We 1180 Property Possible values 1 word form all word forms 2 lower case word form all lower-case forms 3 5-char word form prefixes all 5-char form prefixes 4 capitalization True, False 5 top-800 word form top-800 word forms 6 brown cluster 000, 1100, 010110001, ... 
7 brown cluster, length 5 length 5 prefixes of brown clusters 8 lemma all word lemmas 9 POS tag NNP, CD, JJ, DT, ... 10 morphological features Gender, Case, Number, ... (different across languages) 11 dependency label SBJ, NMOD, LOC, ... 12 edge direction Up, Down Table 1: Word and edge properties in templates. i, i-1, i+1 noFarChildren(wi) linePath(wp, wc) parent(wi) rightNearSib(wi) depPath(wp, wc) allChildren(wi) leftNearSib(wi) depPath(wp, wlca) rightNearChild(wi) firstVSupp(wi) depPath(wc, wlca) rightFarChild(wi) lastVSupp(wi) depPath(wlca, wroot) leftNearChild(wi) firstNSupp(wi) leftFarChild(wi) lastNSupp(wi) Table 2: Word positions used in templates. Based on current word position (i), positions related to current word wi, possible parent, child (wp, wc), lowest common ancestor between parent/child (wlca), and syntactic root (wroot). train our CRF models by maximizing conditional log-likelihood using stochastic gradient descent with an adaptive learning rate (AdaGrad) (Duchi et al., 2011) over mini-batches. The unary and binary factors are defined with exponential family potentials. In the next section, we consider binary features of the observations (the sentence and labels from previous pipeline stages) which are conjoined with the state of the variables in the factor. 3.3 Features for CRF Models Our feature design stems from two key ideas. First, for SRL, it has been observed that feature bigrams (the concatenation of simple features such as a predicate’s POS tag and an argument’s word) are important for state-of-the-art (Zhao et al., 2009; Bj¨orkelund et al., 2009). Second, for syntactic dependency parsing, combining Brown cluster features with word forms or POS tags yields high accuracy even with little training data (Koo et al., 2008). We create binary indicator features for each model using feature templates. Our feature template definitions build from those used by the top performing systems in the CoNLL-2009 Shared Task, Zhao et al. (2009) and Bj¨orkelund et al. (2009) and from features in syntactic dependency parsing (McDonald et al., 2005; Koo et al., 2008). Template Possible values relative position before, after, on distance, continuity Z+ binned distance > 2, 5, 10, 20, 30, or 40 geneological relationship parent, child, ancestor, descendant path-grams the NN went Table 3: Additional standalone templates. Template Creation Feature templates are defined over triples of ⟨property, positions, order⟩. Properties, listed in Table 1, are extracted from word positions within the sentence, shown in Table 2. Single positions for a word wi include its syntactic parent, its leftmost farthest child (leftFarChild), its rightmost nearest sibling (rightNearSib), etc. Following Zhao et al. (2009), we include the notion of verb and noun supports and sections of the dependency path. Also following Zhao et al. (2009), properties from a set of positions can be put together in three possible orders: as the given sequence, as a sorted list of unique strings, and removing all duplicated neighbored strings. We consider both template unigrams and bigrams, combining two templates in sequence. Additional templates we include are the relative position (Bj¨orkelund et al., 2009), geneological relationship, distance (Zhao et al., 2009), and binned distance (Koo et al., 2008) between two words in the path. From Llu´ıs et al. (2013), we use 1, 2, 3gram path features of words/POS tags (path-grams), and the number of non-consecutive token pairs in a predicate-argument path (continuity). 
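To make the template machinery concrete, the following minimal sketch instantiates a few ⟨property, position⟩ template unigrams for a predicate-argument candidate pair and conjoins them into template bigrams. The property names, positions, and example words are invented for illustration and do not reproduce the exact template set used in this work.

```python
# Hypothetical annotations for one predicate-argument candidate pair; the
# property names ("form", "pos", "bc4") and example words are invented.
argument  = {"form": "kids", "pos": "NNS", "bc4": "0110"}
predicate = {"form": "ran",  "pos": "VBD", "bc4": "1100"}

# A template is a (property, position) pair: "p" = predicate, "c" = candidate argument.
unigram_templates = [("pos", "p"), ("form", "c"), ("bc4", "c")]

def instantiate(template, pred, arg):
    """Turn one (property, position) template into a binary feature string."""
    prop, pos = template
    token = pred if pos == "p" else arg
    return f"{prop}.{pos}={token[prop]}"

def extract_features(pred, arg):
    feats = [instantiate(t, pred, arg) for t in unigram_templates]
    # Template bigrams: conjoin two unigram instantiations in sequence.
    n = len(unigram_templates)
    for i in range(n):
        for j in range(i + 1, n):
            feats.append(feats[i] + "&" + feats[j])
    return feats

print(extract_features(predicate, argument))
# ['pos.p=VBD', 'form.c=kids', 'bc4.c=0110',
#  'pos.p=VBD&form.c=kids', 'pos.p=VBD&bc4.c=0110', 'form.c=kids&bc4.c=0110']
```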
3.4 Feature Selection Constructing all feature template unigrams and bigrams would yield an unwieldy number of features. We therefore determine the top N template bigrams for a dataset and factor a according to an information gain measure (Martins et al., 2011):

IG_{a,m} = \sum_{f \in T_m} \sum_{x_a} p(f, x_a) \log_2 \frac{p(f, x_a)}{p(f)\,p(x_a)}

where T_m is the mth feature template, f is a particular instantiation of that template, and x_a is an assignment to the variables in factor a. The probabilities are empirical estimates computed from the training data. This is simply the mutual information of the feature template instantiation with the variable assignment. This filtering approach was treated as a simple baseline in Martins et al. (2011) to contrast with increasingly popular gradient-based regularization approaches. Unlike the gradient-based approaches, this filtering approach easily scales to many features since we can decompose the memory usage over feature templates. As an additional speedup, we reduce the dimensionality of our feature space to 1 million for each clique using a common trick referred to as feature hashing (Weinberger et al., 2009): we map each feature instantiation to an integer using a hash function3 modulo the desired dimensionality. 4 Experiments We are interested in the effects of varied supervision using pipeline and joint training for SRL. To compare to prior work (i.e., submissions to the CoNLL-2009 Shared Task), we also consider the joint task of semantic role labeling and predicate sense disambiguation. Our experiments are subtractive, beginning with all supervision available and then successively removing (a) dependency syntax, (b) morphological features, (c) POS tags, and (d) lemmas. Dependency syntax is the most expensive and difficult to obtain of these various forms of supervision. We explore the importance of both the labels and structure, and what quantity of supervision is useful. 4.1 Data The CoNLL-2009 Shared Task (Hajiˇc et al., 2009) dataset contains POS tags, lemmas, morphological features, syntactic dependencies, predicate senses, and semantic role annotations for 7 languages: Catalan, Chinese, Czech, English, German, Japanese,4 Spanish. The CoNLL-2005 and -2008 Shared Task datasets provide English SRL annotation, and for cross-dataset comparability we consider only verbal predicates (more details in § 4.4). To compare with prior approaches that use semantic supervision for grammar induction, we utilize Section 23 of the WSJ portion of the Penn Treebank (Marcus et al., 1993). 4.2 Feature Template Sets Our primary feature set IGC consists of 127 template unigrams that emphasize coarse properties (i.e., properties 7, 9, and 11 in Table 1). We also explore the 31 template unigrams5 IGB described by Bj¨orkelund et al. (2009). 3To reduce hash collisions, we use MurmurHash v3 https://code.google.com/p/smhasher. 4We do not report results on Japanese as that data was only made freely available to researchers that competed in CoNLL 2009. 5Because we do not include a binary factor between predicate sense and semantic role, we do not include sense as a feature for argument prediction. Each of IGC and IGB also includes 32 template bigrams selected by information gain on 1000 sentences—we select a different set of template bigrams for each dataset. We compare against the language-specific feature sets detailed in the literature on high-resource top-performing SRL systems: From Bj¨orkelund et al.
(2009), these are feature sets for German, English, Spanish and Chinese, obtained by weeks of forward selection (Bde,en,es,zh); and from Zhao et al. (2009), these are features for Catalan Zca.6 4.3 High-resource SRL We first compare our models trained as a pipeline, using all available supervision (syntax, morphology, POS tags, lemmas) from the CoNLL-2009 data. Table 4(a) shows the results of our model with gold syntax and a richer feature set than that of Naradowsky et al. (2012), which only looked at whether a syntactic dependency edge was present. This highlights an important advantage of the pipeline-trained model: the features can consider any part of the syntax (e.g., arbitrary subtrees), whereas the joint model is limited to those features over which it can efficiently marginalize (e.g., short dependency paths). This holds true even in the pipeline setting where no syntactic supervision is available. Table 4(b) contrasts our high-resource results for the task of SRL and sense disambiguation with the top systems in the CoNLL-2009 Shared Task, giving further insight into the performance of the simple information gain feature selection technique. With supervised syntax, our simple information gain feature selection technique (§ 3.4) performs admirably. However, the original unigram Bj¨orkelund features (Bde,en,es,zh), which were tuned for a high-resource model, obtain higher F1 than our information gain set using the same features in unigram and bigram templates (IGB). This suggests that further work on feature selection may improve the results. We find that IGB obtains higher F1 than the original Bj¨orkelund feature sets (Bde,en,es,zh) in the low-resource pipeline setting with constrained grammar induction (DMV+C). 6This covers all CoNLL languages but Czech, where feature sets were not made publicly available in either work. In Czech, we disallowed template bigrams involving path-grams.
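As a concrete illustration of the information-gain filtering and feature hashing of § 3.4 referenced above, the sketch below scores one template from toy (instantiation, assignment) pairs and hashes a feature instantiation into a fixed-size index space. The toy data, function names, and the use of Python's built-in hash in place of MurmurHash are illustrative assumptions, not the implementation evaluated here.

```python
import math
from collections import Counter

def template_information_gain(pairs):
    """Mutual information between one template's instantiations and the
    factor assignments, estimated from (feature_value, assignment) pairs
    collected from training data (a toy stand-in below)."""
    n = len(pairs)
    joint  = Counter(pairs)
    f_marg = Counter(f for f, _ in pairs)
    x_marg = Counter(x for _, x in pairs)
    ig = 0.0
    for (f, x), c in joint.items():
        p_fx = c / n
        ig += p_fx * math.log2(p_fx / ((f_marg[f] / n) * (x_marg[x] / n)))
    return ig

def hash_feature(feature_string, dim=1_000_000):
    """Feature hashing: map a feature instantiation to an index modulo dim.
    Python's salted built-in hash stands in for MurmurHash v3 here, so the
    index varies across runs; a fixed hash function would be used in practice."""
    return hash(feature_string) % dim

# Toy usage: score one template's instantiations against role assignments.
pairs = [("pos.p=VBD", "ARG0"), ("pos.p=VBD", "ARG0"),
         ("pos.p=NN", "_NONE_"), ("pos.p=NN", "ARG1")]
print(round(template_information_gain(pairs), 3))   # 1.0 for this toy sample
print(hash_feature("pos.p=VBD&form.c=kids"))
```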
*Indicates partial averages for the language-specific feature sets (Zca and Bde,en,es,zh), for which we show results only on the languages for which the sets were publicly available. train test 2008 heads 2005 spans 2005 spans (oracle tree) ✓ □PRY’08 2005 spans 84.32 79.44 □B’11 (tdc) — 71.5 □B’11 (td) — 65.0 ✓ □JN’08 2008 heads 85.93 79.90 □Joint, IGC 72.9 35.0 72.0 □Joint, IGB 67.3 37.8 67.1 Table 5: F1 for SRL approaches (without sense disambiguation) in matched and mismatched train/test settings for CoNLL 2005 span and 2008 head supervision. We contrast low-resource (□) and high-resource settings (✓ □), where latter uses a treebank. See § 4.4 for caveats to this comparison. 4.4 Low-Resource SRL CoNLL-2009 Table 4(c) includes results for our low-resource approaches and Naradowsky et al. (2012) on predicting semantic roles as well as sense. In the low-resource setting of the CoNLL2009 Shared task without syntactic supervision, our joint model (Joint) with marginalized syntax obtains state-of-the-art results with features IGC described in § 4.2. This model outperforms prior work (Naradowsky et al., 2012) and our pipeline model (Pipeline) with contrained (DMV+C) and unconstrained grammar induction (DMV) trained on brown clusters (bc). In the low-resource setting, training and decoding times for the pipeline and joint methods are similar as computation time tends to be dominated by feature extraction. These results begin to answer a key research question in this work: The joint models outperform the pipeline models in the low-resource setting. This holds even when using the same feature selection process. Further, the best-performing low-resource features found in this work are those based on coarse feature templates and selected by information gain. Templates for these features generalize well to the high-resource setting. However, analysis of the induced grammars in the pipeline setting suggests that the book is not closed on the issue. We return to this in § 4.5. CoNLL-2008, -2005 To finish out comparisons with state-of-the-art SRL, we contrast our approach with that of Boxwell et al. (2011), who evaluate on SRL in isolation (without sense disambiguation, as in CoNLL-2009). They report results on Prop-CCGbank (Boxwell and White, 2008), which uses the same training/testing splits as the CoNLL-2005 Shared Task. Their results are therefore loosely7 comparable to results on the CoNLL2005 dataset, which we can compare here. There is an additional complication in comparing SRL approaches directly: The CoNLL2005 dataset defines arguments as spans instead of 7The comparison is imperfect for two reasons: first, the CCGBank contains only 99.44% of the original PTB sentences (Hockenmaier and Steedman, 2007); second, because PropBank was annotated over CFGs, after converting to CCG only 99.977% of the argument spans were exact matches (Boxwell and White, 2008). However, this comparison was adopted by Boxwell et al. (2011), so we use it here. 1183 heads, which runs counter to our head-based syntactic representation. This creates a mismatched train/test scenario: we must train our model to predict argument heads, but then test on our models ability to predict argument spans.8 We therefore train our models on the CoNLL-2008 argument heads,9 and post-process and convert from heads to spans using the conversion algorithm available from Johansson and Nugues (2008).10 The heads are either from an MBR tree or an oracle tree. This gives Boxwell et al. 
(2011) the advantage, since our syntactic dependency parses are optimized to pick out semantic argument heads, not spans. Table 5 presents our results. Boxwell et al. (2011) (B’11) uses additional supervision in the form of a CCG tag dictionary derived from supervised data with (tdc) and without (tc) a cutoff. Our model does very poorly on the ’05 spanbased evaluation because the constituent bracketing of the marginalized trees are inaccurate. This is elucidated by instead evaluating on the oracle spans, where our F1 scores are higher than Boxwell et al. (2011). We also contrast with relavant high-resource methods with span/head conversions from Johansson and Nugues (2008): Punyakanok et al. (2008) (PRY’08) and Johansson and Nugues (2008) (JN’08). Subtractive Study In our subsequent experiments, we study the effectiveness of our models as the available supervision is decreased. We incrementally remove dependency syntax, morphological features, POS tags, then lemmas. For these experiments, we utilize the coarse-grained feature set (IGC), which includes Brown clusters. Across languages, we find the largest drop in F1 when we remove POS tags; and we find a gain in F1 when we remove lemmas. This indicates that lemmas, which are a high-resource annotation, may not provide a significant benefit for this task. The effect of removing morphological features is different across languages, with little change in performance for Catalan and Spanish, 8We were unable to obtain the system output of Boxwell et al. (2011) in order to convert their spans to dependencies and evaluate the other mismatched train/test setting. 9CoNLL-2005, -2008, and -2009 were derived from PropBank and share the same source text; -2008 and -2009 use argument heads. 10Specifically, we use their Algorithm 2, which produces the span dominated by each argument, with special handling of the case when the argument head dominates that of the predicate. Also following Johansson and Nugues (2008), we recover the ’05 sentences missing from the ’08 evaluation set. Rem #FT ca de es – 127+32 74.46 72.62 74.23 Dep 40+32 67.43 64.24 67.18 Mor 30+32 67.84 59.78 66.94 POS 23+32 64.40 54.68 62.71 Lem 21+32 64.85 54.89 63.80 Table 6: Subtractive experiments. Each row contains the F1 for SRL only (without sense disambiguation) where the supervision type of that row and all above it have been removed. Removed supervision types (Rem) are: syntactic dependencies (Dep), morphology (Mor), POS tags (POS), and lemmas (Lem). #FT indicates the number of feature templates used (unigrams+bigrams). 20 30 40 50 60 70 0 20000 40000 60000 Number of Training Sentences Labeled F1 Language / Dependency Parser Catalan / Marginalized Catalan / DMV+C German / Marginalized German / DMV+C Figure 3: Learning curve for semantic dependency supervision in Catalan and German. F1 of SRL only (without sense disambiguation) shown as the number of training sentences is increased. but a drop in performance for German. This may reflect a difference between the languages, or may reflect the difference between the annotation of the languages: both the Catalan and Spanish data originated from the Ancora project,11 while the German data came from another source. Figure 3 contains the learning curve for SRL supervision in our lowest resource setting for two example languages, Catalan and German. This shows how F1 of SRL changes as we adjust the number of training examples. 
We find that the joint training approach to grammar induction yields consistently higher SRL performance than its distantly supervised counterpart. 4.5 Analysis of Grammar Induction Table 7 shows grammar induction accuracy in low-resource settings. We find that the gap between the supervised parser and the unsupervised methods is quite large, despite the reasonable accuracy both methods achieve for the SRL end task. 11http://clic.ub.edu/corpus/ancora 1184 Dependency Parser Avg. ca cs de en es zh Supervised* 87.1 89.4 85.3 89.6 88.4 89.2 80.7 DMV (pos) 30.2 45.3 22.7 20.9 32.9 41.9 17.2 DMV (bc) 22.1 18.8 32.8 19.6 22.4 20.5 18.6 DMV+C (pos) 37.5 50.2 34.9 21.5 36.9 49.8 32.0 DMV+C (bc) 40.2 46.3 37.5 28.7 40.6 50.4 37.5 Marginal, IGC 43.8 50.3 45.8 27.2 44.2 46.3 48.5 Marginal, IGB 50.2 52.4 43.4 41.3 52.6 55.2 56.2 Table 7: Unlabeled directed dependency accuracy on CoNLL’09 test set in low-resource settings. DMV models are trained on either POS tags (pos) or Brown clusters (bc). *Indicates the supervised parser outputs provided by the CoNLL’09 Shared Task. WSJ∞ Distant Supervision SAJM’10 44.8 none SAJ’13 64.4 none SJA’10 50.4 HTML NB’11 59.4 ACE05 DMV (bc) 24.8 none DMV+C (bc) 44.8 SRL Marginalized, IGC 48.8 SRL Marginalized, IGB 58.9 SRL Table 8: Comparison of grammar induction approaches. We contrast the DMV trained with Viterbi EM+uniform initialization (DMV), our constrained DMV (DMV+C), and our model’s MBR decoding of latent syntax (Marginalized) with other recent work: Spitkovsky et al. (2010a) (SAJM’10), Spitkovsky et al. (2010b) (SJA’10), Naseem and Barzilay (2011) (NB’11), and the CS model of Spitkovsky et al. (2013) (SAJ’13). This suggests that refining the low-resource grammar induction methods may lead to gains in SRL. Interestingly, the marginalized grammars best the DMV grammar induction method; however, this difference is less pronounced when the DMV is constrained using SRL labels as distant supervision. This could indicate that a better model for grammar induction would result in better performance for SRL. We therefore turn to an analysis of other approaches to grammar induction in Table 8, evaluated on the Penn Treebank. We contrast with methods using distant supervision (Naseem and Barzilay, 2011; Spitkovsky et al., 2010b) and fully unsupervised dependency parsing (Spitkovsky et al., 2013). Following prior work, we exclude punctuation from evaluation and convert the constituency trees to dependencies.12 The approach from Spitkovsky et al. (2013) 12Naseem and Barzilay (2011) and our results use the Penn converter (Pierre and Heiki-Jaan, 2007). Spitkovsky et al. (2010b; 2013) use Collins (1999) head percolation rules. (SAJ’13) outperforms all other approaches, including our marginalized settings. We therefore may be able to achieve further gains in the pipeline model by considering better models of latent syntax, or better search techniques that break out of local optima. Similarly, improving the nonconvex optimization of our latent-variable CRF (Marginalized) may offer further gains. 5 Discussion and Future Work We have compared various approaches for lowresource semantic role labeling at the state-of-theart level. We find that we can outperform prior work in the low-resource setting by coupling the selection of feature templates based on information gain with a joint model that marginalizes over latent syntax. We utilize unlabeled data in both generative and discriminative models for dependency syntax and in generative word clustering. 
Our discriminative joint models treat latent syntax as a structuredfeature to be optimized for the end-task of SRL, while our other grammar induction techniques optimize for unlabeled data likelihood—optionally with distant supervision. We observe that careful use of these unlabeled data resources can improve performance on the end task. Our subtractive experiments suggest that lemma annotations, a high-resource annotation, may not provide a large benefit for SRL. Our grammar induction analysis indicates that relatively low accuracy can still result in reasonable SRL predictions; still, the models do not outperform those that use supervised syntax, and we aim to explore how well the pipeline models in particular improve when we apply higher accuracy unsupervised grammar induction techniques. We have utilized well studied datasets in order to best understand the quality of our models relative to prior work. In future work, we hope to explore the effectiveness of our approaches on truly low resource settings by using crowdsourcing to develop semantic role datasets in other languages and domains. Acknowledgments We thank Richard Johansson, Dennis Mehay, and Stephen Boxwell for help with data. We also thank Jason Naradowsky, Jason Eisner, and anonymous reviewers for comments on the paper. 1185 References Alfred V. Aho and Jeffrey D. Ullman. 1972. The Theory of Parsing, Translation, and Compiling. Prentice-Hall, Inc. Rami Al-Rfou’, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of the 17th Conference on Computational Natural Language Learning (CoNLL 2013). Association for Computational Linguistics. Anders Bj¨orkelund, Love Hafdell, and Pierre Nugues. 2009. Multilingual semantic role labeling. In Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task. Association for Computational Linguistics. Stephen Boxwell and Michael White. 2008. Projecting propbank roles onto the CCGbank. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2008). European Language Resources Association. Stephen Boxwell, Chris Brew, Jason Baldridge, Dennis Mehay, and Sujith Ravi. 2011. Semantic role labeling without treebanks? In Proceedings of the 5th International Joint Conference on Natural Language Processing (IJCNLP). Asian Federation of Natural Language Processing. Peter F. Brown, Peter V. Desouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4). Trevor Cohn, Phil Blunsom, and Sharon Goldwater. 2010. Inducing tree-substitution grammars. The Journal of Machine Learning Research, 11. Michael Collins. 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, University of Pennsylvania. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task. Association for Computational Linguistics. 
Julia Hockenmaier and Mark Steedman. 2007. CCGbank: a corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3). Richard Johansson and Pierre Nugues. 2008. Dependency-based semantic role labeling of PropBank. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP 2008). Association for Computational Linguistics. Dan Klein and Christopher Manning. 2004. CorpusBased induction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL 2004). Association for Computational Linguistics. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL-08: HLT. Association for Computational Linguistics. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning (ICML 2001). Morgan Kaufmann. Xavier Llu´ıs, Xavier Carreras, and Llu´ıs M`arquez. 2013. Joint arc-factored parsing of syntactic and semantic dependencies. Transactions of the Association for Computational Linguistics (TACL). Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The Penn Treebank. Computational linguistics, 19(2). Andre Martins, Noah Smith, Mario Figueiredo, and Pedro Aguiar. 2011. Structured sparsity in structured prediction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011). Association for Computational Linguistics. R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005). Association for Computational Linguistics. Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name tagging with word clusters and discriminative training. In Susan Dumais, Daniel Marcu, and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings. Association for Computational Linguistics. Jason Naradowsky, Sebastian Riedel, and David Smith. 2012. Improving NLP through marginalization of hidden syntactic structure. In Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing (EMNLP 2012). Association for Computational Linguistics. Tahira Naseem and Regina Barzilay. 2011. Using semantic cues to learn syntax. In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI 2011). AAAI Press. 1186 Fernando Pereira and Yves Schabes. 1992. Insideoutside reestimation from partially bracketed corpora. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics (ACL 1992). Nugues Pierre and Kalep Heiki-Jaan. 2007. Extended constituent-to-dependency conversion for english. NODALIDA 2007 Proceedings. Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2). David A. Smith and Jason Eisner. 2008. Dependency parsing by belief propagation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2008). Association for Computational Linguistics. Valentin I. Spitkovsky, Hiyan Alshawi, Daniel Jurafsky, and Christopher D Manning. 2010a. Viterbi training improves unsupervised dependency parsing. 
In Proceedings of the 14th Conference on Computational Natural Language Learning (CoNLL 2010). Association for Computational Linguistics. Valentin I. Spitkovsky, Daniel Jurafsky, and Hiyan Alshawi. 2010b. Profiting from mark-up: Hyper-text annotations for guided parsing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010). Association for Computational Linguistics. Valentin I. Spitkovsky, Hiyan Alshawi, Angel X. Chang, and Daniel Jurafsky. 2011. Unsupervised dependency parsing without gold part-of-speech tags. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011). Association for Computational Linguistics. Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2013. Breaking out of local optima with count transforms and model recombination: A study in grammar induction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013). Association for Computational Linguistics. Kristina Toutanova, Aria Haghighi, and Christopher Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL 2005). Association for Computational Linguistics. Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. 2009. Feature hashing for large scale multitask learning. In L´eon Bottou and Michael Littman, editors, Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009). Omnipress. Daniel H. Younger. 1967. Recognition and parsing of context-free languages in time n3. Information and Control, 10(2). Hai Zhao, Wenliang Chen, Chunyu Kity, and Guodong Zhou. 2009. Multilingual dependency learning: A huge feature engineering method to semantic dependency parsing. In Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task. Association for Computational Linguistics. 1187